Saturday, September 18, 2021

Recent Questions - Mathematics Stack Exchange


Equality in Axler 6.A.12

Posted: 18 Sep 2021 08:06 PM PDT

Axler exercise 6.A.12 states:

Prove that $$(x_1 + \ldots + x_n)^2 \leq n (x_1^2 + \ldots + x_n^2)$$ for all positive integers $n$ and all real numbers $x_1, \ldots, x_n$.

I have no problem proving this by applying the Cauchy-Schwarz inequality to $(1, \ldots, 1)$ and $(x_1, \ldots, x_n)$. My question is about when equality holds. Specifically, I am looking for a condition that is satisfied if and only if equality holds.

I'm able to prove that if these two vectors are linearly dependent, then equality holds. I'm having trouble proving that if equality holds, then these two vectors are linearly dependent. If equality holds, I have: $$(x_1 + \ldots + x_n)^2 = n (x_1^2 + \ldots + x_n^2).$$ The only thing I can do is take square roots: $$|x_1 + \ldots + x_n| = \sqrt{n} \sqrt{x_1^2 + \ldots + x_n^2}.$$ If I had linear dependence, then $(1, \ldots, 1) = t (x_1, \ldots, x_n)$, so $tx_i = 1$ for all $i$ and every entry of $(x_1, \ldots, x_n)$ would have to be the same. I don't know how to derive that from the above conditions.

Is it possible that this implication only goes one way? Otherwise, how can I prove that equality implies this condition?
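One identity that may help (a sketch of a direct route, not necessarily the one Axler intends): expanding both sides shows that

$$n (x_1^2 + \ldots + x_n^2) - (x_1 + \ldots + x_n)^2 = \sum_{1 \leq i < j \leq n} (x_i - x_j)^2,$$

so equality forces $x_i = x_j$ for all $i, j$, which is exactly the linear dependence on $(1, \ldots, 1)$ described above.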

How can I derive $\text{opposite}\cdot\sin\left(\theta\right)+\text{adjacent}\cdot\cos\left(\theta\right)=\text{hypotenuse}$?

Posted: 18 Sep 2021 08:08 PM PDT

Given the equation below:

$$ b \cos\theta = a \sin\theta $$

I have to derive the equation below:

$$ b \sin\theta + a \cos\theta = \sqrt{ a^2 + b^2 } $$

My attempts are as follows:

$$ \frac{ b }{ a } = \frac{ \sin\theta }{ \cos\theta } $$

$$ \frac{ b }{ a } = \tan\theta $$

$$ \text{adjacent} = a $$

$$ \text{opposite} = b $$

$$ \text{hypotenuse} = \sqrt{ a^2 + b^2 } $$

$$ b \sin\theta + a \cos\theta $$

$$ = b \left( \frac{ b \cos\theta }{ a } \right) + a \left( \frac{ a \sin\theta }{ b } \right) $$

$$ = \frac{ b^2 \cos\theta }{ a } + \frac{ a^2 \sin\theta }{ b } $$

I am stuck here.
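One way to finish (a sketch, taking $\theta$ in the first quadrant so that the right-triangle values above apply): from $\tan\theta = \frac{b}{a}$ we can read off

$$ \sin\theta = \frac{ b }{ \sqrt{ a^2 + b^2 } }, \qquad \cos\theta = \frac{ a }{ \sqrt{ a^2 + b^2 } }, $$

so that

$$ b \sin\theta + a \cos\theta = \frac{ b^2 + a^2 }{ \sqrt{ a^2 + b^2 } } = \sqrt{ a^2 + b^2 }. $$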

Dimension of variety of a finitely generated ideal

Posted: 18 Sep 2021 08:00 PM PDT

The following is a theorem from the book "Ideals, Varieties, and Algorithms":

Let $k$ be an algebraically closed field and let $I$ be a homogeneous ideal in $k[x_0, \ldots, x_n]$. If $\dim V(I) > 0$ and $f$ is any nonconstant homogeneous polynomial, then $\dim V(I) \geq \dim V(I + (f)) \geq \dim V(I) - 1$. This implies that if $f_1, \ldots, f_r$ are nonconstant homogeneous polynomials in $k[x_0, \ldots, x_n]$, then $\dim V(f_1, \ldots, f_r) \geq n - r$.

Then it's stated in the book that the above may not hold for affine varieties and a counterexample is:

$\dim V(xz, yz, z-1) = 0 = \dim V(xz, yz) - 2$.

They also mention that it's possible to formulate a version of the above theorem for affine varieties, but that's beyond the scope of the book. That's exactly what I want to know.

To be precise, what is a sufficient condition under which $\dim V(f_1, \ldots, f_r) \geq n - r$ when $f_1,\ldots,f_r$ are nonconstant (possibly non-homogeneous) polynomials in $k[x_1,\ldots,x_n]$? I suspect that a sufficient condition might be that the constant terms of $f_1,\ldots,f_r$ are all $0$. Is that true?

Help in understanding a proof used in numerical linear algebra

Posted: 18 Sep 2021 07:51 PM PDT

I am studying numerical linear algebra with the help of Trefethen and Bau's book. I have come across a proof I do not fully understand. I will attempt to bring the statement and proof as they appear in the book. I apologize in advance for the long post, but I wish to bring the reasoning as it appears in the book.

Let $A$ be an $m \times m$ real, symmetric matrix. Suppose we have $n$ linearly independent vectors $v_1^{(0)},v_2^{(0)},\dots,v_n^{(0)}$, and define $V^{(0)}$ to be the $m \times n$ matrix whose columns are $v_1^{(0)},v_2^{(0)},\dots,v_n^{(0)}$ in this order. Now we define the matrix $V^{(k)}=A^kV^{(0)}$ (the result after $k$ applications of $A$) and we extract a reduced QR factorization of this matrix $\hat{Q}^{(k)}\hat{R}^{(k)}=V^{(k)}$ where $\hat{Q}^{(k)}$ is an $m \times n$ matrix whose columns form an orthonormal basis for the column space of $V^{(k)}$. We make the following assumption on the eigenvalues of $A$: $|\lambda_1|>|\lambda_2|>\dots>|\lambda_n|>|\lambda_{n+1}| \geq |\lambda_{n+2}| \geq \dots \geq |\lambda_{m}|$. Next, we define the matrix $Q$ whose columns are the normalized eigenvectors of $A$ in the order corresponding to the ordering of the eigenvalues in the assumption, and we take $\hat{Q}$ to be the $m \times n$ matrix whose columns are the first $n$ eigenvectors of $A$, $q_1,\dots,q_n$. We now note that $\hat{Q}$ and $\hat{Q}^{(k)}$ are entirely different matrices despite the similar notation. Now, we assume that all the leading principal submatrices of $\hat{Q}^TV^{(0)}$ are nonsingular.

Now, we have the following theorem

Assume the notation and assumptions used above. Then as $k \to \infty$, the columns of $\hat{Q}^{(k)}$ converge linearly to the eigenvectors of $A$, so that $$\|q_j^{(k)}- (\pm q_j)\| = O(C^k)$$ for each $j$ with $1 \leq j \leq n$, where $C < 1$ is the constant $\max_{1 \leq k \leq n} |\lambda_{k+1}|/|\lambda_k|$. The $\pm$ sign means that at each step $k$ one or the other choice of sign is to be taken.

Proof: Extend $\hat{Q}$ to a full $m \times m$ orthogonal matrix $Q$ of eigenvectors of $A$ (in the order matching the ordering of the eigenvalues above), and let $\Lambda$ be the corresponding diagonal matrix of eigenvalues, so that $A=Q\Lambda Q^T$. Just as $\hat{Q}$ is the leading $m \times n$ section of $Q$, define $\hat{\Lambda}$ (still diagonal) to be the leading $n \times n$ section of $\Lambda$. Then we have $$V^{(k)}=A^kV^{(0)}=Q\Lambda^k Q^TV^{(0)}=\hat{Q}\hat{\Lambda}^k\hat{Q}^TV^{(0)}+O(|\lambda_{n+1}|^k)$$ as $k \to \infty$. Due to the assumption on $\hat{Q}^TV^{(0)}$, this matrix is in particular itself nonsingular. Thus, we can multiply the term $O(|\lambda_{n+1}|^k)$ on the right by $\left(\hat{Q}^TV^{(0)}\right)^{-1}\hat{Q}^TV^{(0)}$ to transform our equation to $$V^{(k)}=(\hat{Q}\hat{\Lambda}^k+O(|\lambda_{n+1}|^k))\hat{Q}^TV^{(0)}.$$ Since $\hat{Q}^TV^{(0)}$ is nonsingular, the column space of this matrix is the same as the column space of $$\hat{Q}\hat{\Lambda}^k+O(|\lambda_{n+1}|^k).$$ From the form of $\hat{Q}\hat{\Lambda}^k$ and the assumption on the ordering of the eigenvalues, it is clear that this column space converges linearly to that of $\hat{Q}$. This convergence can be quantified, for example, by defining angles between subspaces; we omit the details. Now, in fact, we have assumed that not only is $\hat{Q}^TV^{(0)}$ nonsingular, but so are all of its leading principal submatrices. It follows that the argument above also applies to leading subsets of the columns of $V^{(k)}$ and $\hat{Q}$: the first column, the first and second columns, the first, second and third columns, and so on. In each case, we conclude that the space spanned by the indicated columns of $V^{(k)}$ converges linearly to the space spanned by the corresponding columns of $\hat{Q}$. From the convergence of the successive column spaces, together with the definition of the $QR$ factorization, the convergence result follows.

Here is where I get confused:

  1. How did the authors obtain $$V^{(k)}=A^kV^{(0)}=Q\Lambda^k Q^TV^{(0)}=\hat{Q}\hat{\Lambda}^k\hat{Q}^TV^{(0)}+O(|\lambda_{n+1}|^k),$$ and what is the exact meaning of the big-$O$ notation here for matrices?
  2. I do not understand the claim that the column space of $\hat{Q}\hat{\Lambda}^k+O(|\lambda_{n+1}|^k)$ converges linearly to that of $\hat{Q}$ (the statement in boldface in the book). How do the form of $\hat{Q}\hat{\Lambda}^k$ and the assumption on the ordering of the eigenvalues lead to linear convergence?

I would appreciate any and all help in understanding and clarifying these points. I really want to know how the authors conclude linear convergence in the claim of point 2. I thank all helpers.
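For intuition, here is a minimal numerical sketch of the iteration described above (my own illustration, not from the book). Re-orthonormalizing at every step produces the same column spaces as factoring $A^kV^{(0)}$ directly, while avoiding overflow; a random symmetric test matrix generically satisfies the eigenvalue-gap assumption.

```python
import numpy as np

# Simultaneous (block power) iteration: a sketch of the setup in the theorem.
rng = np.random.default_rng(0)
m, n = 8, 3
B = rng.standard_normal((m, m))
A = (B + B.T) / 2                        # real symmetric test matrix

w, Q = np.linalg.eigh(A)
order = np.argsort(-np.abs(w))           # eigenvalues by decreasing |lambda|
Qhat = Q[:, order[:n]]                   # the book's Q-hat: first n eigenvectors

V = rng.standard_normal((m, n))          # columns of V^(0)
for _ in range(100):
    V, _ = np.linalg.qr(A @ V)           # same column space as QR of A^k V^(0)

for j in range(n):                       # compare columns up to the +/- sign
    err = min(np.linalg.norm(V[:, j] - Qhat[:, j]),
              np.linalg.norm(V[:, j] + Qhat[:, j]))
    print(f"column {j}: error {err:.2e}")
```

The printed errors shrink geometrically in the number of iterations, which is the linear convergence the theorem asserts.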

Sample size and dimension of variables?

Posted: 18 Sep 2021 07:41 PM PDT

I have a question from my textbook. It says: Let $\mathbf{X}_1,\mathbf{X}_2,\ldots,\mathbf{X}_{20}$ be a random sample of size $n=20$ from an $N_6(\boldsymbol{\mu}, \boldsymbol{\Sigma})$ population. Then the distribution of $\bar{\mathbf{X}}$ is $N_6(\boldsymbol{\mu}, \frac{1}{20}\boldsymbol{\Sigma})$.

My question: I am confused by the 6 and the 20 here. What is 6? What is 20? Also, $\mathbf{X}_1,\mathbf{X}_2,\ldots,\mathbf{X}_{20}$ gives us 20 $\mathbf{X}$'s, and the sample is of size $n=20$. Is this a coincidence?

How is it possible that the binomial coefficient is not Gosper-summable?

Posted: 18 Sep 2021 07:33 PM PDT

I am trying to understand the book A = B, which works through many far more complicated expressions that are Gosper-summable, but then on p. 102 it states that $\binom{n}{k}$, for fixed $n$ as a function of $k$, is not Gosper-summable. How can this possibly be? Can someone show why on earth it is not Gosper-summable?
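If a computational sanity check helps: SymPy implements Gosper's algorithm, and, to my understanding, its `gosper_sum` returns None exactly when no hypergeometric antidifference exists (a sketch, assuming a recent SymPy):

```python
from sympy import symbols, binomial
from sympy.concrete.gosper import gosper_sum

n, k = symbols('n k', integer=True, positive=True)

# None means binomial(n, k), as a function of k with n fixed, has no
# hypergeometric antidifference, i.e. it is not Gosper-summable.
print(gosper_sum(binomial(n, k), k))
```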

Approximating a mixture of gaussians with a single density

Posted: 18 Sep 2021 07:30 PM PDT

Let $f(\mu,\sigma^{2})$ denote the density function of a Gaussian RV with mean $\mu$ and variance $\sigma^{2}$.

I have a function as given below:

$ \hspace{3cm}g(n,\sigma^{2})= \sum_{k=0}^{n} {n \choose k}(1-p)^{n-k}p^{k}f(0,k\sigma^{2}+\beta^{2})$.

Clearly $g$ is a Gaussian mixture distribution with binomial probabilities as weights. Here, $0<p<1$ and $\beta^{2}>0$.

I am interested in approximating this mixture distribution with a single density function which is not a mixture of pdfs.

  • What single pdf would be a decent approximation? The approximation error can be evaluated based on the "distance" or "divergence" between the 2 distributions.

  • If I don't require one global solution for the entire range of $g$, is there a strategy to find the best approximation for a given point in the range of $g$?

  • If I only require a functional approximation (i.e., the approximation need not be a density), is there a better solution to this problem?

Any help/references are appreciated.
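One simple baseline (my own suggestion, not from a reference) is moment matching: every component has mean $0$, and the mixture variance is $\sum_{k}\binom{n}{k}p^k(1-p)^{n-k}(k\sigma^2+\beta^2)=np\sigma^2+\beta^2$. A sketch, assuming scipy is available:

```python
import numpy as np
from scipy.stats import binom, norm

# Compare the Gaussian mixture g with a moment-matched single Gaussian.
n, p, sigma2, beta2 = 20, 0.3, 1.0, 0.5

x = np.linspace(-15, 15, 2001)
w = binom.pmf(np.arange(n + 1), n, p)          # binomial weights
v = np.arange(n + 1) * sigma2 + beta2          # component variances
g = sum(wk * norm.pdf(x, 0.0, np.sqrt(vk)) for wk, vk in zip(w, v))

# Mixture mean is 0; mixture variance is E[k]*sigma2 + beta2.
approx = norm.pdf(x, 0.0, np.sqrt(n * p * sigma2 + beta2))

print("L1 distance on the grid:", np.trapz(np.abs(g - approx), x))
```

How good the match is depends on the parameters and on the divergence you settle on.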

Is the Hausdorff condition necessary for this problem?

Posted: 18 Sep 2021 07:30 PM PDT

Let $X,Y$ be Hausdorff topological spaces, $f: X \to Y$ be a continuous function and $(F_n)_{n\geq 1}$ be a decreasing family of compact subsets of $X$. Prove that $$f\left(\bigcap_{n\geq 1}F_n\right) = \bigcap_{n \geq 1}f(F_n)$$

The "$\subseteq$" inclusion always holds. For the reverse, I considered a $y \in \bigcap_{n\geq 1} f(F_n)$ and noticed that the family $\langle F_n \cap f^{-1}(\{y\})\rangle$ is a non decreasing family of closed subsets of the compact (sub)space $F_1$ so it must have non-empty intersection. This proves the required inclusion. My question is whether the Haussdorf condition for $Y$ can be replaced by just $T_1$. Is this correct or is my solution wrong?

Thanks.

From a group of three biologists, two physicists and one mathematician, a committee of two people is to be randomly selected.

Posted: 18 Sep 2021 08:09 PM PDT

From a group of three biologists, two physicists and one mathematician, a committee of two people is to be randomly selected. Denote by $X$ the random variable representing the number of biologists and by $Y$ the random variable representing the number of physicists on the committee. Calculate: $f_{X, Y}$, $f_{X}$, $f_{Y}$.

Attempt

$$f_{X}=\frac{\binom{3}{1} \binom{2}{1}}{\binom{5}{2}}+\frac{\binom{3}{1} \cdot 1}{\binom{5}{2}}$$

$$f_{Y}=\frac{\binom{2}{1} \binom{3}{1}}{\binom{5}{2}}+\frac{\binom{2}{1} \cdot 1}{\binom{5}{2}}$$

Then, $$f_{X, Y}=f_{X}+f_{Y}$$

Am I understanding the problem?

Digit DP - Find the $N$-th number whose digit product is divisible by its digit sum

Posted: 18 Sep 2021 07:19 PM PDT

Related question

Digit DP - Find the $N$-th number whose digit product is divisible by its digit sum (the $1$st such number is $1$). It is guaranteed that this number is less than $2^{63}$.

I'll call such a number a PDS number. I think I can use binary search, but then I need to count the PDS numbers not greater than a given bound. Using digit DP as in the link takes $O(l^4)$ time ($l$ is the number of digits of $N$), which will not pass the $1$-second time limit.

Alternatively, we can run the digit DP for each number of digits $l$ starting from $1$, so that we learn the number of digits of the answer. But that is quite hard.

What should I do now? Is there any better approach?
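As a correctness reference while developing the digit DP, a brute-force sketch (far too slow for bounds near $2^{63}$, but handy for validating small cases):

```python
def is_pds(m: int) -> bool:
    """True if the digit product of m is divisible by the digit sum.

    Note: a number containing the digit 0 has product 0, which is divisible
    by every sum; whether the original problem counts these is an assumption.
    """
    digits = [int(c) for c in str(m)]
    prod = 1
    for d in digits:
        prod *= d
    return prod % sum(digits) == 0

def nth_pds(n: int) -> int:
    """The n-th PDS number (1-indexed), by exhaustive search."""
    m = 0
    while n > 0:
        m += 1
        if is_pds(m):
            n -= 1
    return m

print([m for m in range(1, 25) if is_pds(m)])
print(nth_pds(100))
```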

Strong differentiability and the inverse function theorem in Banach spaces

Posted: 18 Sep 2021 07:11 PM PDT

I am trying to prove the strong differentiability version of the Inverse Function Theorem for Banach spaces, but I am not sure if it is true. I am interested in this because it is a kind of pointwise version of the theorem. So my main question is:

Is the strong differentiability version of the Inverse Function Theorem true for Banach spaces?

Here is the definition of strong differentiability.

Definition. Let $E$ and $E'$ be normed linear spaces, $A \subseteq E$ an open set, $a \in A$ a point and $f: A \to E'$ a function. We say $f$ is strongly differentiable at $a$ when there is a continuous linear map $D: E \to E'$ such that $$\lim_{(x,x') \to (a,a)} \frac{f(a+x')-f(a+x) - D(x'-x)}{\|x'-x\|} = 0.$$

In this case, $f$ is differentiable at $a$ and $D = Df|_a$, that is, the linear map $D$ is the differential of $f$ at $a$.

Consider the remainder function $r_a(v) = f(a+v) - f(a) - Df|_a(v)$. In finite-dimensional spaces, strong differentiability at $a$ can be shown to be equivalent to the following: for every $\varepsilon > 0$, there is a neighborhood of the origin on which the function $r_a$ is Lipschitz with Lipschitz constant $\varepsilon$. I believe this is also true in infinite dimensions, but I have not proved it yet.

Inverse Function Theorem (strongly differentiable). Let $E$ and $E'$ be Banach spaces, $A \subseteq E$ an open set, $a \in A$ a point and $f: A \to E'$ a function which is strongly differentiable at $a$ and such that $Df|_a:E \to E'$ is a linear isomorphism. In this case, there is an open neighborhood $V \subseteq A$ of $a$ such that $f|_V: V \to f(V)$ is a homeomorphism, the inverse function $f^{-1}: f(V) \to V$ is strongly differentiable at $f(a)$, and its differential at $f(a)$ is $Df^{-1}|_{f(a)} = (Df|_{a})^{-1}$.

Is the Poisson formula valid for weak solutions of the Laplace equation?

Posted: 18 Sep 2021 07:11 PM PDT

In the book "Regularity Theory for Elliptic PDE", here is a theorem as follows

Theorem (Harnack's inequality). Assume $ u\in H^1(B_1) $ is a non-negative weak solution of the Laplace equation $ \Delta{u}=0\text{ in }B_1 $, where $ B_1 $ is the ball with center $ 0 $ and radius $ 1 $. Then the infimum and the supremum of $ u $ are comparable in $ B_{1/2} $. That is, there exists $ C $ depending only on $ n $ such that \begin{eqnarray} \sup_{B_{1/2}}u\leq C\inf_{B_{1/2}}u. \end{eqnarray}

The proof of this theorem in the book uses the Poisson kernel representation \begin{eqnarray} u(x)=C_n\int_{\partial B_{1}}\frac{1-|x|^2}{|x-y|^n}u(y)dS(y),\qquad(*) \end{eqnarray} where $ C_n $ is a constant depending only on $ n $. I see that the final result follows easily from this representation formula. However, I doubt the correctness of this Poisson kernel representation. By the Weyl lemma, $ u\in C^{\infty}(B_1) $, but I cannot conclude that $ u|_{\partial B_1} $ is continuous. Therefore, my question is: is the formula (*) correct for the weak solution $ u $, and how does one derive it? Moreover, I know that if we use the Poisson integral on $ \partial B_{3/4} $ we can also obtain the Harnack inequality, but I still want to know whether we can use the Poisson integral on $ \partial B_{1} $.

How to get the derivative $f'$ by definition?

Posted: 18 Sep 2021 07:17 PM PDT

Consider the function

$$ f(x) = \frac{1-x}{1+x}, $$ with domain the real numbers except $-1$.

Determine, with the help of $$ f'(x) \equiv \lim _{h\to 0}\frac{f(x+h) - f(x)}{h}, $$

the derivative $f'$ of the function $f$.
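For what it's worth, a sketch of the computation with the difference quotient:

$$ f(x+h)-f(x) = \frac{1-x-h}{1+x+h} - \frac{1-x}{1+x} = \frac{(1-x-h)(1+x)-(1-x)(1+x+h)}{(1+x+h)(1+x)} = \frac{-2h}{(1+x+h)(1+x)}, $$

so dividing by $h$ and letting $h \to 0$ gives

$$ f'(x) = \lim_{h\to 0} \frac{-2}{(1+x+h)(1+x)} = -\frac{2}{(1+x)^2}. $$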

If $E$ has positive measure, is it necessarily true that $x = \frac 12(y + z)$ for some $x \in E$ and distinct $y, z\in E$?

Posted: 18 Sep 2021 07:23 PM PDT

Working in $\Bbb R$ and assuming that $E$ has positive Lebesgue measure, does there necessarily exist $x \in E$ s.t. $x = \frac 12(y + z)$ for distinct $y, z \in E$? I think a sufficient condition that could be provable is the following: if $f(x) = m(\frac E2 \cap (x - \frac E2)) > 0$ for some $x \in E$, then there are two distinct $y_1, y_2 \in E/2$ s.t. $y_1 +z_1 = x$ and $y_2 + z_2 = x$ for $z_1, z_2 \in E/2$. Because the $y_i$'s are distinct, so are the $z_i$'s and we are done. It remains to show that $f(x) > 0$ for some $x \in E$. Noting that $$\chi_{x - E/2}(y) = 1 \Leftrightarrow x = y +z/2, z \in E \Leftrightarrow \chi_{y+E/2}(x)$$ we express $f(x) = \int \chi_{E/2}(y) \chi_{x - E/2}(y)\text d y= \int \chi_{E/2}(y) \chi_{y + E/2}(x)\text d y$ and apply Fubini's Theorem to compute \begin{align} \int_E f(x) \text d x &= \int_E \int_\Bbb R \chi_{E/2}(y)\chi_{y+E/2}(x)\text d y \text d x \\ &=\int_\Bbb R\chi_{E/2}(y)\int_\Bbb R \chi_{E \cap (y+E/2)}(x)\text dx \text d y \\ &= \int_{E/2}m(E \cap (y+E/2))\text dy \end{align} It feels to me like this last integral should be bounded below by $m(E)$, but I have no idea how to show this. Am I on the right track or is there a simpler solution?

Power of a point with respect to a circle

Posted: 18 Sep 2021 08:00 PM PDT

Let $ABC$ be an acute triangle and let the feet of the altitudes from $A,B,C$ be $D,E,F$ respectively. Let $H$ be the orthocenter of triangle $ABC$ and let $O$ be the center of the circumcircle of triangle $BHC$. Let $N=DF\cap BH$ and $M=DE\cap CH$. Prove that $AO\perp MN$.


Here are what i did:

  • Let $K=EF\cap MN$ and $L=AD\cap EF$; we have $(K,L,F,E)=-1$. Let $K'=EF\cap BC$; then $(K',L,F,E)=-1$ as well. Therefore $K\equiv K'$.
  • $O$ is the reflection of the center of $(ABC)$ in the line $BC$.

I'm stuck here. Can somebody help me?

Determine the accumulation points of a set of complex numbers

Posted: 18 Sep 2021 07:13 PM PDT

I am reading Complex Variables and Applications by Brown and Churchill. On page 35, exercise 7(b) asks the reader to determine the accumulation points of the following set:

$$S=\left\{\frac{i^n}{n}:n\in\mathbb{N}\right\}$$

After calculating a few elements of $S$ for small values of $n$ by hand and plotting the points in the complex plane, it is obvious to me that $z=0$ is the only accumulation point of $S$. This is confirmed by the answer given in the textbook, and I was easily able to prove that $z=0$ is indeed an accumulation point of $S$. However, the exercise asks the reader to "determine" the accumulation points of $S$, which seems to imply that I also need to prove that there are no other accumulation points of $S$. Unfortunately, I am struggling to prove that if $z\neq0$, then $z$ is not an accumulation point of $S$. Here is what I have so far:

Let $z\in\mathbb{C}\setminus\{0\}$ and let $z_1\in S$

Then there exists $n\in\mathbb{N}$ such that $z_1=\frac{i^n}{n}$

Let $\varepsilon=\frac{1}{n+1}$

Then $|z_1-z|\geq||z_1|-|z||=\left|\left|\frac{i^n}{n}\right|-|z|\right|=\left|\frac{1}{n}-|z|\right|$

I know that I need to show somehow that $\left|\frac{1}{n}-|z|\right|\geq\varepsilon$, but I am not sure how. I chose $\varepsilon=\frac{1}{n+1}$ based on the fact that:

$\left|\frac{1}{n}-|z|\right|\geq\varepsilon\implies\frac{1}{n}-|z|\geq\varepsilon$ or $\frac{1}{n}-|z|\leq-\varepsilon$

$\frac{1}{n}-|z|\geq\varepsilon\implies\frac{1}{n}\geq\varepsilon+|z|\implies\frac{1}{n}>\varepsilon$ since $|z|>0$, which is true for all $n\in\mathbb{N}$ if $\varepsilon=\frac{1}{n+1}$

However, substituting $\varepsilon=\frac{1}{n+1}$ into the second inequality above yields:

$\frac{1}{n}-|z|\leq-\varepsilon\implies\frac{1}{n}-|z|\leq-\frac{1}{n+1}\implies\frac{1}{n}+\frac{1}{n+1}\leq|z|$

which does not appear to be true for all $n\in\mathbb{N}$ since $|z|$ can be arbitrarily small. However, I am not sure how to choose $\varepsilon$ such that both inequalities are satisfied. $\varepsilon=-\frac{1}{n}$ would work, but $\varepsilon$ must be positive. Will my choice of $\varepsilon=\frac{1}{n+1}$ work?

I would appreciate any hints that might help me complete the proof.

Coin Flip Probability Question

Posted: 18 Sep 2021 07:14 PM PDT

I have 8 coins and you have 10 coins. We all toss our coins; what is the probability that I get more heads than you?

I know how to solve the problem for $n$ coins versus $n+1$ coins; the answer is $1/2$ by symmetry. But how do I solve this one?
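The numbers are small enough to compute the probability exactly by conditioning on both head counts (a quick sketch; it does not by itself expose a symmetry argument):

```python
from math import comb
from fractions import Fraction

# Exact P(heads among my 8 fair coins > heads among your 10 fair coins).
p = sum(Fraction(comb(8, i), 2**8) * Fraction(comb(10, j), 2**10)
        for i in range(9) for j in range(11) if i > j)
print(p, float(p))
```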

Let $M\in \text{SO}(3,\mathbb{R})$, prove that $\det(M-I_3)=0$.

Posted: 18 Sep 2021 07:24 PM PDT

Let $M\in \text{SO}(3,\mathbb{R})$, prove that $\det(M-I_3)=0$.

My attempt:

$$ \begin{align} \det(M-I_3)&=\det(M-M^TM)\\&=\det((I_3-M^T)M)\\&=\underbrace{\det(M)}_{=1}\det(I_3-M^T) \end{align} $$

Hence $$ \begin{align} \det(M-I_3)&=\det(I_3-M^T)\\&=\det((I_3-M)^T)\\&=\det(I_3-M)\\&=\det(-(M-I_3))\\&=\underbrace{(-1)^3}_{=-1}\det(M-I_3), \end{align} $$ and thus $\det(M-I_3)=0.$

Is this proof correct or did I miss out on something?

Covering map implies biholomorphism?

Posted: 18 Sep 2021 07:51 PM PDT

Suppose $f: X \to X'$ is a (holomorphic) covering map between hyperbolic surfaces (that is, their universal covering is the unit disk $\mathbb{D}$). Letting $p: \mathbb{D} \to X$ and $p': \mathbb{D} \to X'$ be the universal coverings, one can obtain a lifting $\tilde{f}: \mathbb{D} \to \mathbb{D}$ after choosing some points. Then apparently this lifting is biholomorphic. I see why $\tilde{f}$ is a holomorphic map, since it is locally a composition of holomorphic maps, but I don't quite see why it must be a bijection. So my question is exactly that, is $\tilde{f}$ a bijection?

Edit: I added the section in which this problem appears, as mentioned in the comments.

If the traces of the matrices $A^k$ all equal the size of $A$, why is the spectrum of $A$ equal to $\{1\}$?

Posted: 18 Sep 2021 07:31 PM PDT

Let $A$ be a matrix in $\mathcal{M}_n(\mathbb C)$ such that $\operatorname{Tr}(A^k)=n$ for all $k\in\{1,\ldots,n\}$. I have to prove that $\operatorname{Sp}(A)=\{1\}$. I denote by $\lambda_1,\ldots,\lambda_n$ the eigenvalues of $A$ with multiplicity, so the hypothesis leads to the system of equations $\lambda_1^k+\cdots+\lambda_n^k=n$ for all $k$. I can see that $\lambda_1=\cdots=\lambda_n=1$ is a trivial solution of the system, but it's not obvious to me why it would be the unique solution. Any suggestion?

Let $f:[0,2\pi]\rightarrow[-1,1]$ satisfy $f(\theta)=\sum_{r=0}^n(a_r\sin(r\theta)+b_r\cos(r\theta))$ for $a_r,b_r\in \mathbb{R}$. If $|f(x)|=1$...

Posted: 18 Sep 2021 07:24 PM PDT

Let $f:[0,2\pi]\rightarrow[-1,1]$ satisfy $f(\theta)=\sum_{r=0}^n(a_r\sin(r\theta)+b_r\cos(r\theta))$ for $a_r,b_r\in \mathbb{R}$. If $|f(x)|=1$ for exactly $2n$ distinct values in $[0,2\pi)$, then prove that the number of distinct solutions of $(f''(x))^2+f'(x)f'''(x)=0$ lies in $[4n-2,4n]$.

I know that $-\sqrt{a^2+b^2}+c\leq a\cos(\theta)+b\sin(\theta)+c\leq \sqrt{a^2+b^2}+c$

But this same formula can't be extended to the entire series, since it's not necessary that $\sin(2\theta)$ and $\sin(\theta)$ attain their minimum values at the same value of $\theta$.

If we prove that $1$ is the maximum value of $f(\theta)$, we can prove that $f'(x)=0$ has $2n$ roots. This means $f''(x)$ has $2n-1$ roots (by Rolle's theorem).

Since $\sin 0+\cos 0=1$, the subsequent terms will cancel each other out if $f(x)=1$.

If $f(x)=-1$, then the sum of the subsequent terms will be $-2$

Edit: Corrected mistakes

General form of this integral functional equation?

Posted: 18 Sep 2021 07:39 PM PDT

So I was doing some physics, and I recovered a variant of the Maxwell distribution (the answer is a Gaussian):

$$P(k)dk = \int_{-\infty}^\infty P(k-x)P(x) dx dk $$

I was trying to extend this to the relativistic case. How would one find the general form of this probability distribution $P$? (Is it some kind of Bessel function?)

$$P(k) dk= \int_{0}^\infty P(\lambda)P(k/ \lambda) d \lambda dk $$

Edit:

Calculations for the first claim: $$P(k) dk = \int_{-\infty}^\infty (a^2/ \pi) e^{-a^2(k-x)^2-a^2x^2} dx\, dk= (a^2/ \pi) \int_{-\infty}^\infty e^{-a^2 (k^2 + 2x^2 - 2kx)}dx\, dk = (a^2/ \pi) \int_{-\infty}^\infty e^{-a^2 (2(x - k/2)^2 +k^2/2)}dx\, dk$$

Taking $x \to x + k/2$,

$$(a^2/ \pi) \int_{-\infty}^\infty e^{-a^2 (2x^2 +k^2/2)}dx\, dk= \frac{|a|}{\sqrt{2\pi}} e^{-a^2k^2/2} dk $$

where we then rescale $k \to \sqrt2\, k$ (picking up a factor $\sqrt2$ from $dk$),

$$\frac{|a|}{\sqrt{\pi}} e^{-a^2k^2} dk = P(k) dk$$
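As a numerical cross-check of this calculation (my own sketch, assuming numpy): self-convolving $P(x)=\frac{|a|}{\sqrt{\pi}}e^{-a^2x^2}$ should produce a Gaussian with doubled variance, which is exactly why the rescaling $k \to \sqrt{2}\,k$ is needed at the end.

```python
import numpy as np

# Check: (P * P)(k) for P(x) = |a|/sqrt(pi) * exp(-a^2 x^2) equals
# |a|/sqrt(2 pi) * exp(-a^2 k^2 / 2), a Gaussian with doubled variance.
a = 1.3
x = np.linspace(-20, 20, 4001)
dx = x[1] - x[0]
P = abs(a) / np.sqrt(np.pi) * np.exp(-a**2 * x**2)

conv = np.convolve(P, P, mode="same") * dx
target = abs(a) / np.sqrt(2 * np.pi) * np.exp(-a**2 * x**2 / 2)
print(np.max(np.abs(conv - target)))   # small, up to grid/truncation error
```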

Compute both roots with an alternative method to avoid bad computation

Posted: 18 Sep 2021 07:56 PM PDT

This question is from exercises in the course Numerical Analysis.

I want to compute both roots of $x^2-10^8x+1$ with an alternative method to avoid bad computation. The task also gives the hint that you should try multiplying the formulas for the two roots to see what you get, i.e.: $$\frac{-b-\sqrt{b^2-4ac}}{2a}\cdot\frac{-b+\sqrt{b^2-4ac}}{2a}.$$ I would appreciate help in understanding how to proceed this way.

My attempts: From this hint I came to think that for two roots $\alpha$ and $\beta$ of the quadratic polynomial $ax^2+bx+c$ it holds that $\alpha\beta=\frac{c}{a}$ and $\alpha+\beta=-\frac{b}{a}$, and from this I got $$ \alpha\beta=1,$$ $$\alpha+\beta=10^8.$$

But this doesn't give a method with less bad computation, since rewriting it only takes us back to where we started: for example, $\alpha+\beta=10^8 \Rightarrow \alpha=10^8-\beta$, which gives $\beta(10^8-\beta)=\beta \cdot 10^8 -\beta^2=1$, which is equivalent to $\beta^2-\beta \cdot 10^8+1=0$.

And if one multiplies the formulas for the roots with the values $a$, $b$ and $c$, we get $$\alpha\beta=\frac{(10^8+\sqrt{10^{16}-4})(10^8-\sqrt{10^{16}-4})}{4}$$ but this does not provide a better method either. Am I onto something here, or have I misunderstood something?

Thanks in advance!
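For reference, here is a sketch of the standard remedy the hint is pointing toward: compute the cancellation-free root first, then recover the other from $\alpha\beta = c/a$.

```python
import math

a, b, c = 1.0, -1e8, 1.0
sqrt_disc = math.sqrt(b * b - 4 * a * c)

# Naive quadratic formula: the small root subtracts two nearly equal
# numbers (-b and sqrt_disc are both ~1e8) and loses most of its digits.
naive_small = (-b - sqrt_disc) / (2 * a)
naive_big = (-b + sqrt_disc) / (2 * a)

# Stable version: q is free of cancellation because b and
# copysign(sqrt_disc, b) have the same sign.
q = -0.5 * (b + math.copysign(sqrt_disc, b))
root_big = q / a         # the large-magnitude root, ~1e8
root_small = c / q       # the other root via alpha*beta = c/a, ~1e-8

print(naive_small, naive_big)
print(root_small, root_big)
```

The stable small root comes out as $10^{-8}$ essentially to full precision, while the naive one is visibly wrong.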

If the radius of convergence of the series $\sum a_n z^n$ is $R$, what is the radius of convergence of $\sum s_n z^n$, where $s_n$ is the $n$-th partial sum of $\sum a_n$?

Posted: 18 Sep 2021 08:07 PM PDT

Can we say something about that series in general, or does it depend on the series $\sum a_n z^n$? If it depends on it, in what way?

What is $\mathcal{R}$?

Posted: 18 Sep 2021 07:56 PM PDT

First of all, I am asking this question entirely out of curiosity. It basically popped into my mind at random.

So I am asking for the value of an infinite series.

Let's call it, $\mathcal{R}=\sum_{n=1}^{\infty}\mathcal{R}_n$

Now I will try to explain what these $\mathcal{R}_i$'s are.

Let's take a 1-ball of unit length (1-volume).

Its radius will be $r_1=\frac{1}{2}$

So the first term of my series is $\mathcal{R}_1=1$


Now I will turn the length of this line into the circumference of a circle or a $2$-ball.

Its radius will be $r_2$; a little calculation gives

$r_2=\frac{1}{2\pi}$

And the area of this circle would be, $\pi r_2^2=\frac{1}{4\pi}$

This will be $\mathcal{R}_2=\frac{1}{4\pi}$


Now let's turn the area of this circle into the surface area of a sphere or a $3$-ball,

Its radius will be $r_3$

Again, $r_3=\frac{1}{4\pi}$

And the volume of this sphere will be,

$\frac{4}{3}\pi r_3^3=\frac{1}{48\pi^2}$

This will be, $\mathcal{R}_3=\frac{1}{48\pi^2}$


In general, we take an $n$-ball whose radius is $r_n$; then we turn the $n$-volume of this ball into the $n$-surface area of an $(n+1)$-ball whose radius is $r_{n+1}$.

And $\mathcal{R}$ is the sum over the $n$-volumes of these $n$-balls with radius $r_n$.

So we can write,

$\mathcal{R}=\textstyle\displaystyle\sum_{n=1}^{\infty}V_n(r_n)$

From this we can deduce the formula for $r_n$

Again, in general, we take an $n$-ball with radius $r_n$. Then we set its $n$-volume equal to the $n$-surface area of an $(n+1)$-ball, and we define that $(n+1)$-ball's radius to be $r_{n+1}$. From this we can calculate a recurrence relation for $r_n$:

$V_n(r_n)=S_n(r_{n+1})$

$\textstyle\displaystyle{\frac{\pi^\frac{n}{2}}{\Gamma(\frac{n}{2}+1)}r_n^n=\frac{2\sqrt{\pi}\pi^\frac{n}{2}}{\Gamma(\frac{n+1}{2})}r_{n+1}^n}$

$\textstyle\displaystyle{r_{n+1}=r_n\sqrt[n]{\frac{\Gamma(\frac{n+1}{2})}{2\sqrt{\pi}\Gamma(\frac{n+2}{2})}}}$

with the initial condition of $r_1=\frac{1}{2}$

If we keep expanding $r_n$ using the formula then we get,

$\textstyle\displaystyle{r_n=r_1\prod_{k=1}^{n-1}\sqrt[k]{\frac{\Gamma(\frac{k+1}{2})}{2\sqrt{\pi}\Gamma(\frac{k+2}{2})}}}$

$\textstyle\displaystyle{r_n=\frac{1}{2(2\sqrt{\pi})^{H_{n-1}}}\prod_{k=1}^{n-1}\sqrt[k]{\frac{\Gamma(\frac{k+1}{2})}{\Gamma(\frac{k+2}{2})}}}$

So finally,

$\textstyle\displaystyle{V_n(r_n)=\frac{\pi^\frac{n}{2}}{\Gamma(\frac{n}{2}+1)}r_n^n}$

$\textstyle\displaystyle{\begin{align}\mathcal{R}_n=&\frac{\sqrt{\pi}^{n(1-H_{n-1})}}{2^{n(1+H_{n-1})}\Gamma(\frac{n+1}{2})}\prod_{k=1}^{n}\left(\frac{\Gamma(\frac{k+1}{2})}{\Gamma(\frac{k+2}{2})}\right)^\frac{n}{k}\end{align}}$

So,

$\textstyle\displaystyle{\mathcal{R}=\sum_{n=1}^{\infty}\frac{\sqrt{\pi}^{n(1-H_{n-1})}}{2^{n(1+H_{n-1})}\Gamma(\frac{n+1}{2})}\prod_{k=1}^{n}\left(\frac{\Gamma(\frac{k+1}{2})}{\Gamma(\frac{k+2}{2})}\right)^\frac{n}{k}}$

Here $H_n$ denotes the $n$-th harmonic number, with $H_0=0$.

$\mathcal{R}\approx 1.0817135\dots$

$\underline{\text{My Question}:-}$

Is there a closed form for $\mathcal{R}$? Or maybe we can write $\mathcal{R}$ in terms of some advance function?

Are there some interesting properties of $\mathcal{R}$?

A list of the first few terms of $\mathcal{R}$:-

$\{\mathcal{R}_n\}_{n=1}^{\infty}$

$=\{$$\mathcal{R}_1$,$\mathcal{R}_2$,$\mathcal{R}_3$,$\mathcal{R}_4$,$\mathcal{R}_5$,$\mathcal{R}_6$,$\mathcal{R}_7$,$\mathcal{R}_8$,$\mathcal{R}_9$,$\mathcal{R}_{10}$,$\dots$$\}$

$\textstyle\displaystyle{=\left\{1, \frac{1}{2^2\pi}, \frac{1}{2^43\pi^2}, \frac{1}{2^\frac{23}{3}3^\frac{4}{3}\pi^\frac{10}{3}}, \frac{1}{2^\frac{31}{3}3^\frac{17}{12}5\pi^\frac{14}{3}}, \frac{1}{2^\frac{72}{5}3^\frac{27}{10}5^\frac{6}{5}\pi^\frac{31}{5}}, \frac{1}{2^\frac{163}{10}3^\frac{179}{60}5^\frac{37}{30}7\pi^\frac{116}{15}}, \cdots\right\}}$

You can find more terms yourself by plugging the formula into Wolfram Alpha.
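Alternatively, here is a quick numerical sketch that reproduces the decimal value of $\mathcal{R}$ directly from the recurrence (assuming the mpmath library; it says nothing about the prime pattern):

```python
from mpmath import mp, mpf, pi, gamma, sqrt

mp.dps = 30
r = mpf(1) / 2                  # r_1 = 1/2
R = mpf(0)
for n in range(1, 201):
    R += pi**(mpf(n) / 2) / gamma(mpf(n) / 2 + 1) * r**n   # V_n(r_n)
    # V_n(r_n) = S_n(r_{n+1}) determines the next radius:
    ratio = gamma(mpf(n + 1) / 2) / (2 * sqrt(pi) * gamma(mpf(n + 2) / 2))
    r *= ratio**(mpf(1) / n)
print(R)                        # ~ 1.0817135...
```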

Notice that the denominators have a very curious property: they are all products of prime numbers raised to rational powers. More peculiar still, these are not arbitrary primes but precisely the first $n$ primes.

So there will definitely be some involvement of prime numbers in the evaluation of $\mathcal{R}$.

Observing some patterns:-

Notice that the $(2n-1)^\text{st}$ and $(2n)^\text{th}$ terms each have the first $n$ primes in their denominators.

Also notice that when the $n^\text{th}$ prime first appears, at the $(2n-1)^\text{st}$ term, it is raised to the first power, and at the $(2n)^\text{th}$ term it is raised to the power $\frac{p_n+1}{p_n}$ (where $p_n$ denotes the $n^\text{th}$ prime).

I can't prove these observations, but if they hold out to infinity then we can write

$\textstyle\displaystyle{\mathcal{R}=\sum_{n=1}^{\infty}\left(\frac{1}{p_n\pi^{q_{2n-1}}\prod_{k=1}^{n-1}p_k^{r_{k,2n-1}}}+\frac{1}{p_n^{\frac{p_n+1}{p_n}}\pi^{q_{2n}}\prod_{k=1}^{n-1}p_k^{r_{k,2n}}}\right)}$

For some rational sequence $q_n$ and $r_{m,n}$

It seems like $\forall n\in\mathbb{N}, q_{n+1}>q_n$

And the greatest common divisor of the denominators of $r_{m,n}$ and $q_n$ is not $1$, namely

If $\textstyle\displaystyle{r_{m,n}=\frac{s_{m,n}}{t_{m,n}}}$ and $\textstyle\displaystyle{q_n=\frac{a_n}{b_n}}$ such that $\operatorname{gcd}(s_{m,n},t_{m,n})=\operatorname{gcd}(a_n,b_n)=1$

Then, $\operatorname{gcd}(t_{1,n},\dots,t_{m,n},b_n)\neq 1$ for $n\gt 3$

I don't know whether my observations hold out to infinity, or whether they even simplify $\mathcal{R}$, but they increase my hope of finding a closed form for $\mathcal{R}$.

If I am able to spot more patterns, I will add them to the post.

And please tell me if there are any mistakes.

Extensions with the same degree are $K$-isomorphic?

Posted: 18 Sep 2021 08:00 PM PDT

Suppose we have two simple algebraic extensions $K(a)$, $K(b)$ over a field $K$

(I am interested in case $\mathrm{char}(K)=0$ but I guess there's no difference)

If they have the same (finite) degree over $K$, are they always $K$-isomorphic?

Question in Ch. 12 of Apostol's number theory (Vol. 1)

Posted: 18 Sep 2021 07:35 PM PDT

I am trying some exercises from Apostol's Introduction to Analytic Number Theory, and I could not solve this particular problem (number 16) of the textbook; I need help.


I am sorry; I am unable to provide an attempt, as I have no idea which result to use.

For background, I am taking a number theory course this semester where Bernoulli polynomials were taught, and I have also read about them in Apostol.

Kindly shed some light on this!

Exercise 2-20 in Fulton's curves book

Posted: 18 Sep 2021 07:52 PM PDT

I read the following in Fulton's book. Could you help me to solve it? Thanks!

Example: $V = V(XW - YZ) \subseteq \mathbb{A}^4(k)$, $\Gamma(V) = k[X, Y, Z, W]/(XW-YZ)$. Let $\overline{X}, \overline{Y}, \overline{Z}, \overline{W}$ be the residues of $X, Y, Z, W$ in $\Gamma(V)$. Then $\overline{X}/\overline{Y} = \overline{Z}/\overline{W} = f \in k(V)$ is defined at $P = (x, y, z, w) \in V$ if $y \neq 0 $ or $w \neq 0$.

Exercise 2-20: Show that it is impossible to write $f = a/b$, where $a, b \in \Gamma(V)$, and $b(P) \neq 0$ for every $P$ where $f$ is defined. Show that the pole set of $f$ is exactly $\{(x, y, z, w) \mid y = w =0\}$.

find a length of a triangle having two known sides and a median line

Posted: 18 Sep 2021 08:02 PM PDT

Can anyone help me solve this problem in a simple way? I used the relation area($ACD$) + area($ABD$) = area($ABC$) to solve the problem, but it is quite complicated. Thank you very much!

Problem: Two sides of a triangle have lengths 25 and 20, and the median to the third side has length 19.5. Find the length of the third side.
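For reference, one standard shortcut is the median-length (Apollonius) formula $m_c^2 = \frac{2a^2+2b^2-c^2}{4}$, where $m_c$ is the median to side $c$. With $a=25$, $b=20$, $m_c=19.5$:

$$ 19.5^2 = \frac{2 \cdot 25^2 + 2 \cdot 20^2 - c^2}{4} \implies 1521 = 2050 - c^2 \implies c^2 = 529, $$

so the third side is $c = 23$.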

