Friday, April 1, 2022

Recent Questions - Mathematics Stack Exchange

The question is related to solving a nonhomogeneous PDE with given initial conditions.

Posted: 01 Apr 2022 02:04 AM PDT

[Question explained in detail][1]

How to generate expressions to map a logical matrix to a linear array with strides?

Posted: 01 Apr 2022 02:00 AM PDT

Given a matrix, is there a general formula/algorithm for generating a mapping between the matrix and a linear representation of it, where certain elements need to be skipped/strided consistently?

In the example below, we need to read 2 elements, then stride 2 elements in the linear array, and continue until we have moved through the whole matrix (drawn in a linear manner, but with coordinates describing each element).

So can we generate an expression given the size of the matrix ($m \times n$), the size of the linear array ($p$), and the read/stride amounts? (See the sketch after the example below.)

Example
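For concreteness, here is a minimal Python sketch of one such mapping (not from the original post; row-major order is assumed, and `read`/`skip` are illustrative parameter names for the read-2/stride-2 pattern above):

```python
# Entry k of the row-major flattened matrix (k = i*n + j) lands at
# linear-array position (k // read) * (read + skip) + (k % read):
# read `read` elements, then skip `skip` slots, and repeat.
def matrix_to_strided(mat, p, read=2, skip=2):
    m, n = len(mat), len(mat[0])
    out = [None] * p
    for k in range(m * n):
        i, j = divmod(k, n)                        # matrix coordinates
        pos = (k // read) * (read + skip) + (k % read)
        if pos >= p:
            raise ValueError("linear array too short for this pattern")
        out[pos] = mat[i][j]
    return out

# A 2x4 matrix into a length-16 array: values land at 0,1, 4,5, 8,9, 12,13.
print(matrix_to_strided([[1, 2, 3, 4], [5, 6, 7, 8]], 16))
```

The closed-form expression being asked for is then $\mathrm{pos}(k) = \lfloor k/r\rfloor(r+s) + (k \bmod r)$ with $k = in + j$, where $r$ is the read count and $s$ the skip count.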

Aronszajn-Gagliardo Theorem: case of exponent $\theta$ (Exercise 2.8.4 from Bergh-Löfström)

Posted: 01 Apr 2022 01:59 AM PDT

I posted this question on MathOverflow but didn't get any answers, so I'm trying here.

I'm working on the book "Interpolation Spaces" by Bergh and Löfström. I'm interested in solving Exercise 2.8.4 on page 33, which asks:

Let $A$ be an interpolation space of exponent $\theta$ with respect to $\overline{A}$. Prove that there is a minimal interpolation functor $F_\theta$, which is exact and of exponent $\theta$, such that $F_\theta(\overline{A})=A$.

The following hint is given:

Use the functional $N_\theta(x)=\sum_j\|T_j\|_{A_0,X_0}^{1-\theta}\|T_j\|_{A_1,X_1}^{\theta}\|a_j\|_A$.

The idea is to rely on the proof of the Aronszajn-Gagliardo theorem (Theorem 2.5.1 from Bergh-Löfström), which can be stated as follows:

Consider the category $\mathscr{B}$ of all Banach spaces. Let $A$ be an interpolation space with respect to $\overline{A}$. There is a minimal exact interpolation functor $F_0$ such that $F_0(\overline{A})=A$.

If $\overline{X}=(X_0,X_1)$, $X:=F_0(\overline{X})$ is defined by the set of elements $x\in \Sigma(\overline{X})$ which admit a representation $x=\sum_j T_j a_j$ in $\Sigma(\overline{X})$, where $T_j:\overline{A}\rightarrow \overline{X}$, $a_j\in A$. The norm of $x$ in $X$ is defined as the infimum of $N_X(x)$ over all admissible representations of $x$, where $$ N_X(x)=\sum_j\max\left(\|T_j\|_{A_0,X_0},\|T_j\|_{A_1,X_1}\right)\|a_j\|_A. $$ I can prove all the points in the same way by replacing $N_X$ by $N_\theta$, except one: the inclusion $F_\theta (\overline{X})\subset \Sigma(\overline{X})$. Indeed, for the classical case, they do it like this: let $x=\sum_j T_j a_j\in X$; then $$ \begin{split} \|x\|_{\Sigma(\overline{X})}&\leq \sum_j\|T_j a_j\|_{\Sigma(\overline{X})}\\ &\leq \sum_j\inf_{a_j=a_j^{(0)}+a_j^{(1)}}\|T_j\|_{A_0,X_0}\|a_j^{(0)}\|_{A_0}+\|T_j\|_{A_1,X_1}\|a_j^{(1)}\|_{A_1}\\ &\leq \sum_j\max\left(\|T_j\|_{A_0,X_0},\|T_j\|_{A_1,X_1}\right)\|a_j\|_{\Sigma(\overline{A})}\\ &\leq C\sum_j\max\left(\|T_j\|_{A_0,X_0},\|T_j\|_{A_1,X_1}\right)\|a_j\|_{A}, \end{split} $$ since $A\subset \Sigma(\overline{A})$. Therefore, $\|x\|_{\Sigma(\overline{X})}\leq C\|x\|_{X}$. But now for the $\theta$ case, I can't do the same, because I can't prove that $$ \inf_{a_j=a_j^{(0)}+a_j^{(1)}}\|T_j\|_{A_0,X_0}\|a_j^{(0)}\|_{A_0}+\|T_j\|_{A_1,X_1}\|a_j^{(1)}\|_{A_1}\leq C\|T_j\|_{A_0,X_0}^{1-\theta}\|T_j\|_{A_1,X_1}^{\theta}\|a_j\|_{\Sigma(\overline{A})}, $$ and I don't even know if it is true (we need $C$ independent of $j$ and of the representation of $x$).

So my question is: do you have any idea how to prove the previous inequality, or how to prove that $$ F_\theta(\overline{X})\subset \Sigma(\overline{X}), $$ possibly in another way?

Thanks a lot.

Find solutions to the square root of a state transition matrix

Posted: 01 Apr 2022 01:57 AM PDT

I am faced with an MDP problem with latent variables. I hope to recover the transition matrix from the observational distribution.

Specifically, I have three random variables $U_0,U_1,U_2$. They all take discrete values in $n$ states. Their generating mechanism is $U_0 \rightarrow U_1 \rightarrow U_2$.

In my problem, $U_0$ and $U_2$ are observable, $U_1$ is latent.

We know the data generating mechanisms in $U_0 \rightarrow U_1$ and $U_1 \rightarrow U_2$ are the same. That is, the transition matrix $P_{01}$ from $U_0$ to $U_1$ is the same as $P_{12}$. Note that the matrices are defined as $\{P_{01}\}_{ij}=pr(u_1^j|u_0^i)$, $\{P_{12}\}_{ij}=pr(u_2^j|u_1^i)$, $\{P_{02}\}_{ij}=pr(u_2^j|u_0^i)$, $i,j\in\{1,2,...,n\}$.

My interest is to recover the state transition matrix $P_{12}$ from the observational distribution $P_{02}$.

Because $P_{02}=P_{01} \cdot P_{12}$ and $P_{01}=P_{12}$, we have $P_{02}=P_{12} \cdot P_{12}$. So, we need to compute the square root of $P_{02}$.

How many legal solutions can there be? Is it true that only one legal solution exists? Can you offer a counterexample where there is more than one legal solution? (By legal, I mean that the elements of $P_{02}$ and $P_{12}$ are all probabilities, $0<p<1$ in my case, and every column of these two matrices must sum to 1.)

How can I obtain these square root solutions?

I found a similar problem here, but their setting is different.
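For experimenting, here is a small numerical sketch (using SciPy; this computes the principal square root, and sign flips in the eigenbasis give the other roots, which are generally not stochastic):

```python
import numpy as np
from scipy.linalg import sqrtm

P12 = np.array([[0.7, 0.3],
                [0.2, 0.8]])   # a stochastic transition matrix
P02 = P12 @ P12                # observational distribution, P02 = P12 . P12

R = sqrtm(P02)                 # principal matrix square root
print(np.allclose(R, P12))     # True: recovers P12 in this example
print(R.sum(axis=1))           # stochasticity check along one axis
```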

Sketch the phase portrait of the linear dynamical system $\dot{x} = −x + 2y$, $\dot{y} = 2x − 4y$

Posted: 01 Apr 2022 01:48 AM PDT

Sketch the phase portrait of the linear dynamical system
$\dot{x} = −x + 2y$
$\dot{y} = 2x − 4y$

In this case I found the eigenvalues to be $\lambda_1 = 0, \lambda_2 = -5$, so $\lambda_1\lambda_2 = 0$. What do I do in this case? It's different from all the questions I had before, since I don't know how to classify the equilibrium point when $\lambda_1\lambda_2 = 0$.
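Not part of the original question, but a short matplotlib sketch can make the degenerate case visible: with $\lambda_1=0$ the matrix is singular, so its kernel (the line $y=x/2$) is a whole line of equilibria rather than an isolated equilibrium point.

```python
import numpy as np
import matplotlib.pyplot as plt

x, y = np.meshgrid(np.linspace(-3, 3, 25), np.linspace(-3, 3, 25))
u = -x + 2 * y        # x' = -x + 2y
v = 2 * x - 4 * y     # y' = 2x - 4y

plt.streamplot(x, y, u, v, density=1.2)
plt.plot([-3, 3], [-1.5, 1.5], "r--", label="line of equilibria $y = x/2$")
plt.legend()
plt.show()
```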

Understanding orthogonal projection of $\vec{y}$ on to span of orthogonal set, with an example

Posted: 01 Apr 2022 02:03 AM PDT

This is to verify whether there's an issue with my understanding or with the textbook. There also seems to be a previous question here on exactly the same problem; I'm hoping this helps myself and future students.

I did the following and got a different answer to the question. I also verified my answer by testing it for orthogonality against each of its basis vectors ($u_1, u_2$ are orthogonal):

Textbook answer:

[image: the textbook gives the projection as the vector $\vec{y}$ itself]


My working:

$\hat{\vec{y}}$ = projection of $\vec{y}$

$\hat{\vec{y}} = \frac{\vec{y}\cdot \vec{u_1}}{\vec{u_1}\cdot\vec{u_1}}\vec{u_1} + \frac{\vec{y}\cdot \vec{u_2}}{\vec{u_2}\cdot\vec{u_2}}\vec{u_2}$

$\hat{\vec{y}} = \frac{\begin{pmatrix}-1\\3\\6\end{pmatrix}\cdot \begin{pmatrix}-5\\-1\\2\end{pmatrix}}{\begin{pmatrix}-5\\-1\\2\end{pmatrix}\cdot \begin{pmatrix}-5\\-1\\2\end{pmatrix}}\begin{pmatrix}-5\\-1\\2\end{pmatrix}+\frac{\begin{pmatrix}-1\\3\\6\end{pmatrix}\cdot \begin{pmatrix}1\\-1\\2\end{pmatrix}}{\begin{pmatrix}1\\-1\\2\end{pmatrix}\cdot \begin{pmatrix}1\\-1\\2\end{pmatrix}}\begin{pmatrix}1\\-1\\2\end{pmatrix} = \begin{pmatrix}-1\\-\frac{9}{5}\\\frac{18}{5}\end{pmatrix}$


Verify orthogonality:

$\vec{z}=$ vector height of the right-angle triangle whose base is the projection ($\hat{\vec{y}}$)

$\vec{z}= \vec{y} - \hat{\vec{y}}=\begin{pmatrix}-1\\3\\6\end{pmatrix}-\begin{pmatrix}-1\\-\frac{9}{5}\\\frac{18}{5}\end{pmatrix} = \begin{pmatrix}0\\\frac{24}{5}\\\frac{12}{5}\end{pmatrix}$

$\vec{z} \cdot \vec{u_1}=0, \vec{z} \cdot \vec{u_2}=0 $

But the textbook gives the projection as the vector ($\vec{y}$) itself, which by arithmetic doesn't seem right: $\begin{pmatrix}-1\\3\\6\end{pmatrix}\cdot \begin{pmatrix}-5\\-1\\2\end{pmatrix} = 14 \neq 0$.
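A quick numpy check of the working above (the residual of the computed projection is orthogonal to both $\vec{u_1}$ and $\vec{u_2}$, while $\vec{y}\cdot\vec{u_1}=14\neq0$):

```python
import numpy as np

y  = np.array([-1.0, 3.0, 6.0])
u1 = np.array([-5.0, -1.0, 2.0])
u2 = np.array([1.0, -1.0, 2.0])

# projection onto span{u1, u2} for an orthogonal set {u1, u2}
y_hat = (y @ u1) / (u1 @ u1) * u1 + (y @ u2) / (u2 @ u2) * u2
z = y - y_hat

print(y_hat)           # [-1.  -1.8  3.6], i.e. (-1, -9/5, 18/5)
print(z @ u1, z @ u2)  # both 0.0
print(y @ u1)          # 14.0, so y is not its own projection here
```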

Evaluating $\lim_{n \rightarrow \infty} n^{- \frac{1}{2} (1+ \frac{1}{2})} ( 1^{1} 2^{2}\cdots n^{n})^{ \frac{1}{n^{2}} }$

Posted: 01 Apr 2022 01:51 AM PDT

$$\lim_{n \rightarrow \infty} n^{- \frac{1}{2} (1+ \frac{1}{2})} ( 1^{1} 2^{2}\cdots n^{n})^{ \frac{1}{n^{2}} }$$

Does every proof that abelian groups are amenable rely on the axiom of choice?

Posted: 01 Apr 2022 01:51 AM PDT

Question in the title. So far, any proof I've seen that all, say countable discrete, abelian groups are amenable requires some sort of argument or technique that relies on choice, e.g. using the Markov-Kakutani fixed point theorem or convergence w.r.t. ultrafilters, so I guess every proof needs choice. Is that the case?

To be clear, I don't have anything against choice, I'm just curious ;-)

Brezis's Ex 3.32: Projection on the domain of a proper convex l.s.c. map

Posted: 01 Apr 2022 01:39 AM PDT

I'm doing Ex 3.32 (parts 5 and 6) in Brezis's book of Functional Analysis. Could you check my attempt?

Let $(E, |\cdot|)$ be a uniformly convex Banach space and $C \subset E$ a nonempty closed convex subset.

  1. Prove that for every $x \in E$, $$ \inf _{y \in C}|x-y| $$ is achieved by some unique point in $C$, denoted by $P x$.
  2. Prove that every minimizing sequence $\left(y_{n}\right)$ in $C$ converges strongly to $P x$.
  3. Prove that the map $x \mapsto P x$ is continuous from $E$ strong into $E$ strong.
  4. More precisely, prove that $P$ is uniformly continuous on bounded subsets of $E$.

Let $\varphi: E \rightarrow(-\infty,+\infty]$ be a convex l.s.c. function, $\varphi \not \equiv+\infty$.

  5. Prove that for every $x \in E$ and every integer $n \geq 1$, $$ \inf _{y \in E}\left\{n|x-y|^{2}+\varphi(y)\right\} $$ is achieved at some unique point, denoted by $y_{n}$.
  6. Prove that $y_{n} \underset{n \rightarrow \infty}{\longrightarrow} P x$, where $C=\overline{D(\varphi)}$.

I post my proof separately as an answer below. This allows me to subsequently remove this question from the unanswered list.

What is the shortest way to enclose $n$ circles?

Posted: 01 Apr 2022 01:54 AM PDT

What is the shortest way to enclose $n$ circles of radius $1$? Is there a formula for the length of the shortest path? I have tried to find the shortest way for each $n\leq10$. Below are the shortest ways I have found, but there may be shorter ways than these.

n length
1 $2\pi$
2 $2\pi+4$
3 $2\pi+6$
4 $2\pi+8$
5 $2\pi+10$
6 $2\pi+8+2\sqrt{3}$
7 $2\pi+12$
8 $2\pi+14$
9 $2\pi+12+2\sqrt{3}$
10 $2\pi+16$

Path and circles

Why does squaring $\sqrt{x+2}\ge x$ miss an interval?

Posted: 01 Apr 2022 01:52 AM PDT

If I square both sides of $\sqrt{x+2}\ge x$ I get,

$$x^2-x-2\le0\quad\Rightarrow\quad (x-2)(x+1)\le0\quad\Rightarrow\quad x\in[-1,2]$$ But the interval $[-2,-1)$ should be included in the solution, and it is missed here. I'm wondering what goes wrong in the above approach that it misses an interval?

A query on the eigenvalues of anti-involution matrix

Posted: 01 Apr 2022 01:43 AM PDT

An involution matrix $A$ is defined by the condition $$ A^2 = I \tag{1} $$

The eigenvalues of an involution matrix $A$ are the square roots of unity, $\pm 1$.

Generalizing, an $m$-involution matrix $A$ is defined by the condition $$ A^m = I \tag{2} $$

The eigenvalues of an $m$-involution matrix $A$ are the $m$th roots of unity.

In a similar manner, an anti-involution matrix $A$ can be defined as $$ A^2 = - I $$

(I hope that this is the standard definition)

An example of an anti-involution matrix is $$ A = \left[ \begin{array}{cc} 0 & -1 \\ 1 & 0 \\ \end{array} \right] $$

The eigenvalues of the above matrix $A$ are: $\pm j$.

The characteristic equation of this $2 \times 2$ matrix $A$ is $$ \lambda^2 + 1 = 0 $$

Is it possible to show that the eigenvalues of an anti-involution matrix are the square roots of $-1$ ?

In a similar way, if we define anti-m-involution matrix $A$ as $$ A^m = - I, $$ can we show that its eigenvalues are the $m$th roots of $-1$?

I don't ask about the eigenvectors of anti-involution matrices, as I am not aware of any results for the eigenvectors of involution matrices. Kindly help me with their eigenvalues. Thank you.
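A quick numerical sanity check of the $2 \times 2$ example (numpy):

```python
import numpy as np

A = np.array([[0.0, -1.0],
              [1.0,  0.0]])
print(np.linalg.eigvals(A))          # [0.+1.j, 0.-1.j], the square roots of -1
print(np.linalg.matrix_power(A, 2))  # equals -I, so A is an anti-involution
```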

Find the probability that $x<1/4$

Posted: 01 Apr 2022 01:59 AM PDT

Given that $x<0$, find the probability that $x<1/4$, where the density is $f(x)=a\cos(\pi x)$ for $-1/2<x<1/2$, and $0$ otherwise.

I got an answer of 0.3535. Is this correct?

Why is this operator ($\hat{a}$) well defined (quantum physics)?

Posted: 01 Apr 2022 02:02 AM PDT

My problem: if $x$ is a vector with components $(x_i)$ in that Hilbert basis (those coefficients must form an $l^2$ sequence according to basic Hilbert theory), then its "image" would have coefficients $\sqrt{n+1}x_{n+1}$, which is clearly not guaranteed to be an $l^2$ sequence due to the increasing unbounded factor $\sqrt{n+1}$, so the image is not well defined.

[Image: the definition of the operator $\hat{a}$, acting on the basis coefficients as $(\hat{a}x)_n = \sqrt{n+1}\,x_{n+1}$.]

Prove that $P$ is invertible.

Posted: 01 Apr 2022 01:46 AM PDT

Let $W$ be a subspace of $\Bbb R^m$ with dimension $n$. Let $\{u_1,\dots,u_n\}$ and $\{w_1,\dots,w_n\}$ be bases for $W$. Let $P=(p_{i,j})$ be an $n\times n$ matrix such that $$w_i=\sum_{j=1}^n p_{j,i}u_j=p_{1,i}u_1+...+p_{n,i}u_n, 1\leq i\leq n,$$ i.e. $(w_1\, w_2\, ...\, w_n)=(u_1\, u_2\, ...\, u_n)P$ as a block matrix multiplication. Equivalently, $W=UP$, where the columns of the matrix $W$ are $w_1,...,w_n$, and the columns of the matrix $U$ are $u_1,...,u_n$. Prove that $P$ is invertible.

I'd like to ask: how can one write $u_k$ as a linear combination of $w_1,...,w_n$ for each $k$, $1\leq k \leq n$? How should I solve this problem?

Quotient field of polynomial ring over prime ideal is field of rational functions over the prime ideal?

Posted: 01 Apr 2022 01:54 AM PDT

I'm reading a book about algebraic geometry, and I met this problem in the context of transcendental extensions.

Let $K$ be a field, $P$ be a prime ideal in the polynomial ring $K[X_1,...,X_n]$, and let $L$ be the quotient field of $K[X_1,...,X_n]/P$.

Then I don't know what $L$ is. I'm not good with quotient rings; I know just the definition of the quotient ring and some examples.

Is it $ K(X_1,...,X_n)/P$? If so, how can I show it?

($K(X_1,...,X_n)$ is a quotient field of $K[X_1,...,X_n]$.)

The book then continues by saying that the transcendence degree of $L/K$ is finite... (assuming $Tr.d_K \mathbb{C} = \infty$), but I am not sure about this because of this point.

Find the $k$-th derivative at $x=0$ of $f(1/n)=\ln{(1+2n)}-\ln{n}$

Posted: 01 Apr 2022 01:52 AM PDT

Suppose $f(x)$ is a real function on $[-1, 1]$ that satisfies

(1) $f(\frac{1}{n})=\ln{(1+2n)}-\ln{n}, n=1,2,\cdots$;

(2) derivative of any order exists;

(3) $\exists M>0$ such that $\left|f^{(n)}(x)\right|\le n!M$.

Find $f^{(k)}(0), k=0,1,\cdots$.

Proving by induction: If $x + \frac{1}{x} = 2 \cos(\cos\theta)$, then $x^n + \frac{1}{x^n} = 2\cos(\cos(n\theta))$

Posted: 01 Apr 2022 01:51 AM PDT

A math competition for 11th and 12th graders asked us to solve this.

Given: $$x + \frac{1}{x} = 2 \cos(\cos\theta)$$

Prove: $$x^n + \frac{1}{x^n} = 2\cos(\cos(n\theta))$$

We just managed to prove it works for $n = 1$ by using the equation given, but that's about it.

Nobody managed to solve it, not even our teacher after the fact.

If a point is selected inside a rectangle what's the probability that the point is closer to center than vertex?

Posted: 01 Apr 2022 01:49 AM PDT

If a point is selected inside a rectangle what's the probability that the point is closer to center than vertex?

I thought of working on this problem using the concept of area.

If I draw two concentric rectangles where length and width of inner rectangle are half of the outer one, then the probability should be the ratio of areas of both rectangles.

Therefore $P(E) = \dfrac{l\times b}{2l\times 2b} = 1/4$

But the answer given in my book is $1/2$. What's the problem here?
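A Monte Carlo sketch of the book's answer (assuming the intended reading is "closer to the center than to the nearest vertex"; the side lengths here are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)
a, b = 3.0, 2.0
pts = rng.uniform(0.0, 1.0, size=(200_000, 2)) * [a, b]

center = np.array([a / 2, b / 2])
verts = np.array([[0, 0], [a, 0], [0, b], [a, b]], dtype=float)

d_center = np.linalg.norm(pts - center, axis=1)
d_vertex = np.linalg.norm(pts[:, None, :] - verts[None, :, :], axis=2).min(axis=1)
print((d_center < d_vertex).mean())   # ~0.5, matching the book's answer
```

The ratio-of-areas argument fails because the region closer to the center than to the nearest vertex is bounded by perpendicular bisectors of the center-vertex segments, not by a scaled-down inner rectangle.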

Need help solving this set of differential equations of motion

Posted: 01 Apr 2022 02:00 AM PDT

I'm a master's student and I have a set of differential equations I need to solve for my research. For context, they are equations of motion for scalar fields coupled to gravity, though I suppose that's not really relevant to the maths involved.

Specifically, they take the form $$\chi''(\rho)+2A'(\rho)\chi'(\rho)-(\chi'(\rho))^2-12=0$$ $$A''(\rho)-A'(\rho)\chi'(\rho)+2(A'(\rho))^2-24=0$$ $$2(A'(\rho))^2-2(\chi'(\rho))^2-24=0$$

Now, I already know that there are solutions where $\chi'$ and $A'$ are constant ($\chi'=2$, $A'=4$). However, I know from existing literature using a different number of dimensions that there should also be another solution taking the form of a sum of (natural) logarithms of hyperbolic functions, such that when $\rho\rightarrow\infty$, $\chi'$ and $A'$ tend to the constant solutions. The problem is that for the life of me I cannot figure out how to find this additional solution. I have tried writing out trial solutions of the form $$\chi=\chi_0+\chi_1\log(\cosh(\chi_2\rho))+\chi_3\log(\sinh(\chi_4\rho))$$ $$A=A_0+A_1\log(\cosh(A_2\rho))+A_3\log(\sinh(A_4\rho))$$ But despite my best efforts I haven't been able to decipher any kind of solution or determine the constants so far. I feel a little embarrassed because I should probably be able to solve this kind of thing by now, but I desperately need some help here. I'm not even really sure where to start, apart from redefining $\chi_4$ and $A_4$ in terms of the other constants. I'm using Mathematica, if that helps at all. Apologies if this is a stupid question.
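Not a closed form, but a Python sketch (SciPy; the question uses Mathematica) that treats the first two equations as a first-order system and monitors the third as an algebraic constraint can help check candidate solutions numerically. One can verify directly that $(\chi',A')=(2,4)$ satisfies all three equations exactly, and that the constraint is preserved by the flow:

```python
import numpy as np
from scipy.integrate import solve_ivp

def rhs(rho, y):
    chi, A, u, v = y            # u = chi'(rho), v = A'(rho)
    du = u**2 + 12 - 2*u*v      # from chi'' + 2 A' chi' - (chi')^2 - 12 = 0
    dv = u*v - 2*v**2 + 24      # from A'' - A' chi' + 2 (A')^2 - 24 = 0
    return [u, v, du, dv]

# Start slightly off the fixed point (u, v) = (2, 4), on the constraint
# surface 2 v^2 - 2 u^2 - 24 = 0.
u0 = 1.9
y0 = [0.0, 0.0, u0, float(np.sqrt(u0**2 + 12))]
sol = solve_ivp(rhs, (0.0, 5.0), y0, rtol=1e-10, atol=1e-12)

u, v = sol.y[2], sol.y[3]
print(np.max(np.abs(2*v**2 - 2*u**2 - 24)))  # constraint drift, ~ 0
print(u[-1], v[-1])                          # flows back toward (2, 4)
```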

Beta and Gamma functions helping solve an integral

Posted: 01 Apr 2022 01:55 AM PDT

I want to compute the following integral: $$I= \int_0 ^1 e^x\frac{x^2}{\sqrt[4]{1-x^4}} dx$$

I was thinking about using the gamma function:

$$\Gamma(z) = \int_0^\infty e^{-t}t^{z-1}\,dt$$ and $$\Gamma(z + 1) = z\Gamma(z)$$

and the beta function:

$$B(u, v) = \int_0^1 t^{u-1} (1 - t)^{v-1}\,dt$$

However, there is an $e^x$ factor next to $\frac{x^2}{\sqrt[4]{1-x^4}}$, so it seems that I cannot use them here, or am I wrong?

How may I compute that definite integral?
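For comparison (a side computation, not the integral above): without the $e^x$ factor, the substitution $t=x^4$ does reduce the integral to a Beta function, $$\int_0^1 \frac{x^2}{\sqrt[4]{1-x^4}}\,dx = \frac14\int_0^1 t^{-1/4}(1-t)^{-1/4}\,dt = \frac14\,B\!\left(\tfrac34,\tfrac34\right).$$ It is the extra $e^x$ factor that breaks this pattern, which is why the Beta function does not apply directly.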

Why is 1+2 exactly 3 by the Dedekind cut way of defining real numbers?

Posted: 01 Apr 2022 01:53 AM PDT

Suppose, by the definition of a Dedekind cut, every cut must be closed downwards, meaning it must contain every single element of $Q$ below its elements. So we define addition of reals in this case as: $$x+y = \{r\in Q :\exists t \in x,\exists s\in y,(r < t+s)\}$$

Then how do we tell that $1+2$ is exactly $3$ by intuition? Thank you!

Proving the angle-side-longer-side congruence

Posted: 01 Apr 2022 01:39 AM PDT

Consider $\triangle ABC$ and $\triangle DEF$ such that $\angle A \cong \angle D$, $\overline{AB} \cong \overline{DE}$, $\overline{BC} \cong \overline{EF}$, and assume in addition that $\overline{BC} \geq \overline{AB}$. Then $\triangle ABC \cong \triangle DEF$.

What I have so far:

If $\overline{BC} = \overline{AB}$, then by the isosceles triangle theorem $\angle A \cong \angle C$, which means by the transitive property $\angle A \cong \angle C \cong \angle D \cong \angle F$. Then by AAS congruence $\triangle ABC \cong \triangle DEF$. If segment $\overline{BC} > \overline{AB}$, then we can construct a perpendicular bisector from vertex $B$ that intersects $\overline{AC}$ at point $X$, with $\angle BXA$ and $\angle BXC$ being right angles (and likewise from $E$, meeting $\overline{DF}$ at $Y$). With $\overline{AB} \cong \overline{DE}$, $\angle A \cong \angle D$, and $\angle AXB \cong \angle DYE$, then $\triangle ABX \cong \triangle DEY$ by AAS congruence. With $\overline{BC} \cong \overline{EF}$, $\angle A \cong \angle D$, and $\angle CXB \cong \angle FYE$, then $\triangle BCX \cong \triangle EFY$ by AAS congruence. Then by congruence and the betweenness theorem for points, $\overline{AX} \cong \overline{DY}$ and $\overline{XC} \cong \overline{YF}$; by substitution, $AX + XC = AC$ and $DY + YF = DF$, so $AC \cong DF$, and then by SSS congruence $\triangle ABC \cong \triangle DEF$.

this is the line I'm stuck with:

If segment $\overline{BC} > \overline{AB}$, then we can construct a perpendicular bisector from vertex $B$ that intersects $\overline{AC}$ at point $X$, with $\angle BXA$ and $\angle BXC$ being right angles.

I don't know if I can actually construct this bisector from the vertex, since it may not exist in every triangle, and if I can't, then the rest of my proof is thrown out the window. So I need some confirmation on whether this holds for every triangle, and if it doesn't, I could definitely use some help proving these two triangles are congruent. Thanks

Matrix representation of operators on $\ell^2$.

Posted: 01 Apr 2022 01:43 AM PDT

Let $A=(a_{nk})_{n,k=1}^{\infty}$ be an infinite matrix that defines a bounded operator on $\ell^2(\mathbb{N})$ by matrix multiplication. Suppose that $B=(b_{nk})_{n,k=1}^{\infty}$ satisfies $|b_{nk}|\leq |a_{nk}|$ for all $n,k\in \mathbb{N}$.

Question: Does $B$ act boundedly on $\ell^2(\mathbb{N})$ by matrix multiplication?

Remark: It's easy to construct examples where $\|B\|_{\ell^2\to\ell^2}>\|A\|_{\ell^2\to\ell^2}$ but I haven't been able to construct a pair of matrices such that the operator induced by $B$ is unbounded.

How to prove $\langle x^{2n+1}: n\in \mathbb{N}\rangle$ is dense in $\{ f\in C([0,1]): f(0)=0\}$

Posted: 01 Apr 2022 01:59 AM PDT

I'm trying to prove that $\langle x^{2n+1}: n\in \mathbb{N}_0\rangle$ is dense in $\{ f\in C([0,1]): f(0)=0\}$ without the use of the Müntz–Szász theorem.

I know how to prove this for even exponents by using the Stone-Weierstrass theorem, however $\langle x^{2n+1}:n\in \mathbb{N}\rangle$ isn't an algebra so the same proof won't work. I then thought to show there is a sufficiently good approximation for an even exponent polynomial by odd exponent ones but that didn't seem to go anywhere.

I've been given a hint to prove it first for functions with the added condition of being differentiable at $0$ but I'm not sure how that makes the problem any easier. I'm quite stuck so any help would be really appreciated.

Isomorphic pullbacks through projections determine isomorphic vector bundles

Posted: 01 Apr 2022 01:44 AM PDT

The context is the following: working with $\xi = (E,p_E,X),\eta = (F,p_F,X)$ two vector bundles, denote by $\pi_0:X \times D_0 \to X,\pi_{\infty} : X \times D_{\infty} \to X$ the projections onto the first factor, where $D_0 := \{z \in \mathbb{C} : |z| \leq 1\},D_{\infty} := \{z \in \mathbb{C} : |z| \geq 1\} \cup \{\infty\}$, so that the clutching construction, denoted by $[\xi,u]$, is defined for some $u$. In general, is it true that if $[\xi,u] \simeq [\eta,u]$ as vector bundles over $X \times S^2$, then $\eta \simeq \xi$ as vector bundles over $X$? It seems to me that, since $u$ is the same, this implies in particular that $\pi_0^*(\xi) \simeq \pi_0^*(\eta)$ and $\pi_{\infty}^*(\xi) \simeq \pi_{\infty}^*(\eta)$. Since $\pi_0$ and $\pi_{\infty}$ are projections, an isomorphism of two such pullback bundles determines an isomorphism $w : \xi \to \eta$.

Is my intuition correct or otherwise, what am I missing? Any help would be appreciated.

Edit: The question could be rephrased with less context as follows. Is it true that if $\xi = (E,p_E,X)$ is a vector bundle and $f : B\times X \to X$ is the projection on the second factor, then $$f^*(\xi) \simeq (E \times B, p_E\times \mathrm{id}, X \times B),$$ and, if $\eta = (F,p_F,X)$ is another vector bundle, does an isomorphism $f^*(\xi) \simeq f^*(\eta)$ determine an isomorphism $\xi \simeq \eta$?

$\frac{2-a}{a+\sqrt{bc+abc}}+\frac{2-b}{b+\sqrt{ca+abc}}+\frac{2-c}{c+\sqrt{ab+abc}}\ge1$

Posted: 01 Apr 2022 01:59 AM PDT

For $a,b,c\ge0: ab+bc+ca+abc=4$ then: $$\frac{2-a}{a+\sqrt{bc+abc}}+\frac{2-b}{b+\sqrt{ca+abc}}+\frac{2-c}{c+\sqrt{ab+abc}}\ge1$$

I used the condition and got: $a+\sqrt{bc+abc}=a+\sqrt{4-a(b+c)}\le a+2$. So we need to prove that: $$\frac{2-a}{a+2}+\frac{2-b}{b+2}+\frac{2-c}{c+2}\ge1$$ I tried to expand fully, but the rest seems complicated to me. Can anyone help me complete my idea? Any thoughts are welcome, thanks!

How to find the implicit differentiation of a fraction?

Posted: 01 Apr 2022 02:00 AM PDT

I need to determine whether point $P$ is a local max/min or a stationary point, so I need to take the second derivative.

Question: $5x^2+6xy+5y^2 = 8$

I figured out that the first derivative is: $\frac{dy}{dx} = \frac{-10x-6y}{6x+10y}$

Therefore the second derivative is: $\frac{d^2y}{dx^2} = \frac{d}{dx}\left(\frac{-10x-6y}{6x+10y}\right)$

But I am not sure how to proceed. I know that you pick up a factor of $\frac{dy}{dx}$ every time you differentiate a $y$ (at least this is the way I have been taught), but how do you go about doing this? There are fractions, and the $x$ and $y$ variables are on top and bottom. Thanks
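For reference, a sketch of the quotient-rule step (writing $y'$ for $\frac{dy}{dx}$; each differentiated $y$ contributes a factor $y'$, and afterwards one substitutes the expression for $y'$ found above): $$\frac{d^2y}{dx^2}=\frac{(-10-6y')(6x+10y)-(-10x-6y)(6+10y')}{(6x+10y)^2}.$$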

How to take integral of absolute value(x) on a Casio fx-991ms

Posted: 01 Apr 2022 02:05 AM PDT

I don't know if this is the correct place for this question, if there is a more appropriate forum please let me know.

I am trying to integrate a probability density function that has an absolute value in it. How can I integrate an absolute value on a Casio fx-991MS calculator? In order to enter $|x|$ I need to switch to Mode 2, but once I do that, everything I entered for the integral disappears. If I try to do this in Mode 2 to begin with, then the integral symbol simply does not work in Mode 2.

Prob. 18, Chap. 5 in Baby Rudin: Another Form of Taylor's Theorem

Posted: 01 Apr 2022 01:41 AM PDT

Here is Prob. 18, Chap. 5 in the book Principles of Mathematical Analysis by Walter Rudin, 3rd edition:

Suppose $f$ is a real function on $[a, b]$, $n$ is a positive integer, and $f^{(n-1)}$ exists for every $t \in [a, b]$. Let $\alpha$, $\beta$, and $P$ be as in Taylor's theorem (5.15). Define $$ Q(t) = \frac{ f(t) - f(\beta) }{ t- \beta} $$ for $t \in [a, b]$, $t \neq \beta$, differentiate $$ f(t) - f(\beta) = (t-\beta) Q(t) $$ $n-1$ times at $t = \alpha$, and derive the following version of Taylor's theorem: $$ f(\beta) = P(\beta) + \frac{Q^{(n-1)}(\alpha)}{(n-1)!} (\beta - \alpha)^n. $$

And, here is Theorem 5.15 in Baby Rudin, 3rd edition:

Suppose $f$ is a real function on $[a, b]$, $n$ is a positive integer, $f^{(n-1)}$ is continuous on $[a, b]$, and $f^{(n)}(t)$ exists for every $t \in (a, b)$. Let $\alpha$, $\beta$ be distinct points of $[a, b]$, and define $$ P(t) = \sum_{k=0}^{n-1} \frac{f^{(k)}(\alpha)}{k!} \left( t-\alpha \right)^k.$$ Then there exists a point $x$ between $\alpha$ and $\beta$ such that $$ f(\beta) = P(\beta) + \frac{f^{(n)}(x)}{n!} (\beta - \alpha )^n.$$

An Attempt:

For all $t \in [a, b]$, we have $$ \begin{align} f(t) - f(\beta) &= ( t-\beta) Q(t), \tag{1} \\ f^\prime(t) &= Q(t) + (t-\beta) Q^\prime(t), \tag{2} \\ f^{\prime\prime}(t) &= 2Q^\prime(t) + (t-\beta) Q^{\prime\prime}(t), \tag{3} \\ f^{(3)}(t) &= 3 Q^{\prime\prime}(t) + (t-\beta)Q^{(3)}(t), \tag{4} \\ f^{(4)}(t) &= 4 Q^{(3)}(t) + (t-\beta) Q^{(4)}(t), \tag{5} \\ \cdots &= \cdots \\ f^{(n-1)}(t) &= (n-1) Q^{(n-2)}(t) + (t-\beta) Q^{(n-1)}(t). \tag{*} \end{align} $$ So, for $t = \alpha$, the above chain of equations yields $$ \begin{align} f(\beta) &= f(\alpha) + Q(\alpha) (\beta - \alpha ) \qquad \mbox{ [ using (1) ] } \\ &= f(\alpha) + \left( f^\prime(\alpha) + (\beta - \alpha) Q^\prime(\alpha) \right) (\beta - \alpha ) \qquad \mbox{ [ using (2) ] } \\ &= f(\alpha) + f^\prime(\alpha) (\beta - \alpha) + Q^\prime(\alpha) (\beta - \alpha)^2 \\ &= f(\alpha) + f^\prime(\alpha) (\beta - \alpha) + \left[ \frac{1}{2} \left( f^{\prime\prime}(\alpha) + (\beta - \alpha) Q^{\prime\prime}(\alpha) \right) \right] (\beta - \alpha)^2 \qquad \mbox{ [ using (3) ] } \\ &= f(\alpha) + \frac{f^\prime(\alpha)}{1!} (\beta - \alpha) + \frac{f^{\prime\prime}(\alpha)}{2!} (\beta - \alpha)^2 + \frac{Q^{\prime\prime}(\alpha)}{2!} (\beta - \alpha)^3 \\ &= f(\alpha) + f^\prime(\alpha) (\beta - \alpha) + \frac{f^{\prime\prime}(\alpha)}{2} (\beta - \alpha)^2 + \frac{\frac{1}{3} \left( f^{(3)}(\alpha) + (\beta - \alpha) Q^{(3)}(\alpha) \right) }{2} (\beta - \alpha)^3 \qquad \mbox{ [ using (4) ] } \\ &= f(\alpha) + \frac{f^\prime(\alpha)}{1!} (\beta - \alpha) + \frac{f^{\prime\prime}(\alpha)}{2!} (\beta - \alpha)^2 + \frac{f^{(3)}(\alpha)}{3!}(\beta-\alpha)^3 + \frac{Q^{(3)}(\alpha)}{3!} (\beta-\alpha)^4 \\ &= f(\alpha) + \frac{f^\prime(\alpha)}{1!} (\beta - \alpha) + \frac{f^{\prime\prime}(\alpha)}{2!} (\beta - \alpha)^2 + \frac{f^{(3)}(\alpha)}{3!}(\beta-\alpha)^3 + \frac{ \frac{1}{4} \left( f^{(4)}(\alpha) + (\beta - \alpha) Q^{(4)}(\alpha) \right) }{3!} (\beta-\alpha)^4 \qquad \mbox{ [ using (5) ] } \\ &= f(\alpha) + \frac{f^\prime(\alpha)}{1!} (\beta - \alpha) + \frac{f^{\prime\prime}(\alpha)}{2!} (\beta - \alpha)^2 + \frac{f^{(3)}(\alpha)}{3!}(\beta-\alpha)^3 + \frac{f^{(4)}(\alpha)}{4!} (\beta - \alpha)^4 + \frac{Q^{(4)}(\alpha)}{4!} (\beta-\alpha)^5 \\ &= \cdots \\ &= f(\alpha) + \frac{f^\prime(\alpha)}{1!} (\beta - \alpha) + \frac{f^{\prime\prime}(\alpha)}{2!} (\beta - \alpha)^2 + \frac{f^{(3)}(\alpha)}{3!}(\beta-\alpha)^3 + \frac{f^{(4)}(\alpha)}{4!} (\beta - \alpha)^4 + \cdots + \frac{f^{(n-1)}(\alpha)}{(n-1)!} (\beta-\alpha)^{n-1} + \frac{Q^{(n-1)}(\alpha)}{(n-1)!} (\beta - \alpha)^n \\ &= P(\beta) + \frac{Q^{(n-1)}(\alpha)}{(n-1)!} (\beta - \alpha)^n, \end{align} $$ as required.

Is this proof correct? If so, then is it rigorous enough for Rudin as well?
