Sunday, January 16, 2022

Recent Questions - Mathematics Stack Exchange



Prove $\exists C>0$ such that $ \|a_1 x_1+... +a_nx_n\|\ge C(|a_1|+...+|a_n|)$

Posted: 16 Jan 2022 12:15 AM PST

Let $(X, \|\cdot\|)$ be a normed space and let $\{x_1, x_2,\dots, x_n\}$ be a linearly independent set in $X$.

Then, $\exists C>0$ such that

\begin{align}\|a_1 x_1+... +a_nx_n\|&\ge C(|a_1|+...+|a_n|)\end{align}

where the $a_i$ are scalars.

My attempt:

To avoid triviality, assume $a_j\neq 0$ for some $j\in \Bbb{N_n}$

Suppose no such $C>0$ exists which satisfies the above inequality.

Then, $\|\sum_{j=1}^{n} a_jx_j\|< C\sum_{j=1}^{n} |a_j|$

$\|\sum_{j=1}^{n} (\frac{a_j}{ \sum_{i=1}^{n} |a_i|} ) x_j\|< C$

Then, $\|\sum_{j=1}^{n} (\frac{a_j}{ \sum_{i=1}^{n} |a_i|} ) x_j\|=0$

$\implies \sum_{j=1}^{n} (\frac{a_j}{ \sum_{i=1}^{n} |a_i|} ) x_j=0$

Since $\frac{a_j}{ \sum_{i=1}^{n} |a_i|}\neq 0$ for some $j\in \mathbb{N_n}$,

this implies $\{x_1, x_2,\dots, x_n\}$ is a linearly dependent set in $X$, a contradiction.

Is my proof correct?
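The inequality can at least be sanity-checked numerically in a concrete space. Below is a small sketch of my own (the vectors $x_1,x_2$, the Euclidean norm, and the grid are my choices, not from the question): by homogeneity it suffices to minimize $\|a_1x_1+a_2x_2\|$ over coefficients with $|a_1|+|a_2|=1$, and the minimum is the best constant $C$.

```python
import math

# Two linearly independent vectors in R^2 (arbitrary choice for illustration)
x1 = (1.0, 0.0)
x2 = (1.0, 0.1)  # nearly parallel to x1, so the best C should be small but positive

def ratio(a1, a2):
    """||a1*x1 + a2*x2|| / (|a1| + |a2|), defined for (a1, a2) != (0, 0)."""
    vx = a1 * x1[0] + a2 * x2[0]
    vy = a1 * x1[1] + a2 * x2[1]
    return math.hypot(vx, vy) / (abs(a1) + abs(a2))

# By homogeneity it suffices to scan the l^1 unit sphere |a1| + |a2| = 1.
best = min(
    ratio(t, s * (1 - abs(t)))
    for s in (1, -1)
    for t in [i / 1000 - 1 for i in range(2001)]
)
print(best)  # strictly positive: a constant C exists for this pair
```

The closer to parallel the $x_i$ are, the smaller the constant, which matches the intuition behind the statement.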

Proving an inequality that is too long to fit in the title

Posted: 16 Jan 2022 12:06 AM PST

I was proving something in Linear Algebra when I faced this problem:
$$(a_1^2 + a_2^2 + \cdots + a_n^2)(b_1^2 + b_2^2 + \cdots + b_n^2) \ge (a_1b_1 + a_2b_2 + \cdots + a_nb_n)^2,$$ where the $a_i$ and $b_i$ are real numbers.

I tested it for some values of $a$ and $b$ and the tests checked out. Can someone help me prove this? Or is it even correct?
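This is the Cauchy–Schwarz inequality; one standard proof notes that the quadratic $t\mapsto\sum_i(a_it+b_i)^2\ge0$ forces a non-positive discriminant. A randomized numeric check (my own sketch, not from the question):

```python
import random

random.seed(1)

def cauchy_schwarz_holds(a, b):
    """Check (sum a_i^2)(sum b_i^2) >= (sum a_i*b_i)^2 for real sequences."""
    lhs = sum(x * x for x in a) * sum(y * y for y in b)
    rhs = sum(x * y for x, y in zip(a, b)) ** 2
    return lhs >= rhs - 1e-9  # tiny slack for floating-point rounding

trials = all(
    cauchy_schwarz_holds(
        [random.uniform(-10, 10) for _ in range(5)],
        [random.uniform(-10, 10) for _ in range(5)],
    )
    for _ in range(1000)
)
print(trials)  # True
```

Equality holds exactly when one vector is a scalar multiple of the other, which is why random draws essentially never hit it.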

How to calculate the monotonic sequence limit

Posted: 16 Jan 2022 12:23 AM PST

Using the Monotone Convergence Theorem, we know the sequence $x_{n} = (1-\frac{1}{2})(1-\frac{1}{4})\cdots(1-\frac{1}{2^{n}})$ has a limit, but I cannot derive the exact limit. How do I calculate the limit of $x_{n}$?

Epsilon-delta proof of difficult function limit

Posted: 15 Jan 2022 11:58 PM PST

I wanted to ask whether it is possible to write out the epsilon-delta proof of the following limit: $$\lim_{x\to\infty}\frac{\arctan(3e^x)}{\ln(1-e^x)}$$ Thanks for your help in advance.

What are some mathematical theorems which become unknown if you reject the axiom of choice?

Posted: 15 Jan 2022 11:49 PM PST

I would like to see statements which are known to be true if you accept the axiom of choice, but it is unknown whether the statement is necessarily true under ZF without the axiom of choice. I'd especially like statements that have easy and/or simple proofs assuming choice.

Can somebody send a detailed answer on how to prove it?

Posted: 15 Jan 2022 11:28 PM PST

![question image](https://i.stack.imgur.com/xcx2f.png)

Prove that if $V$ is a finite-dimensional vector space over a field $F$ and $u_1,u_2,\dots,u_m \in V$ are linearly independent, then we can find vectors $u_{m+1},u_{m+2},\dots,u_{m+r}$ in $V$ such that $\{u_i \mid 1\le i\le m+r\}$ is a basis of $V$.

Suppose we are given five integers $a,b,c,d,e$. Using these integers, how can we parametrize the equation $x^2=y^2+z^2+p^2+q^2$?

Posted: 16 Jan 2022 12:01 AM PST

Suppose we are given five integers $a,b,c,d,e$. Using these integers, how can we parametrize the equation $$ x^2=y^2+z^2+p^2+q^2$$ where $x,y,z,p,q$ are all variables?

I tried to parametrize the equation using the Pythagorean-triple idea, but I could not make it work. Please give some hint/idea.

Conditional probability with possibly unnecessary information?

Posted: 15 Jan 2022 11:39 PM PST

Let's suppose event $A$ has a $5\%$ chance of happening. The chance that event $A$ is of type $B$ is $70\%$. Event $A^c$ has a $5\%$ chance of being of type $B$. Given that the event is of type $B$, what is the probability that it is event $A$?

I solved this by using conditional probability formula: $$P(A|B) = \frac{P(A \cap B)}{P(A \cap B)+P(A^c \cap B)} = \frac{\frac{70}{100}}{\frac{70}{100}+\frac{5}{100}}= \frac{14}{15}$$

Am I missing something or why are we given $P(A)=5\%$ for this problem?
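For comparison, here is a direct Bayes computation that does use the prior $P(A)=5\%$ (a sketch of mine; whether this is the intended reading of the problem is for the asker to judge):

```python
# Given: P(A) = 0.05, P(B|A) = 0.70, P(B|A^c) = 0.05
p_a = 0.05
p_b_given_a = 0.70
p_b_given_not_a = 0.05

# Bayes' rule: P(A|B) = P(B|A)P(A) / (P(B|A)P(A) + P(B|A^c)P(A^c))
posterior = (p_b_given_a * p_a) / (
    p_b_given_a * p_a + p_b_given_not_a * (1 - p_a)
)
print(posterior)  # about 0.424, noticeably different from 14/15
```

The difference from the $14/15$ above comes from weighting each likelihood by the prior probability of its hypothesis.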

Find the characteristic polynomial of the matrix $M$ in terms of characteristic polynomial of $N$.

Posted: 15 Jan 2022 11:17 PM PST

Find the characteristic polynomial of the matrix $M$ in terms of characteristic polynomial of $N$.

$M=\begin{pmatrix}N+V & U\\U^T& I\end{pmatrix}$ where $V=\left[ \begin{array}{ccccc} n & 0 & \ldots & 0\\ 0 & 0 & \ldots & 0\\ \ldots& \ldots &\ldots &\ldots\\ \ldots& \ldots &\ldots &\ldots\\ \ldots& \ldots &\ldots &\ldots\\ 0 & 0 & \ldots & 0\\ \end{array} \right]_{n\times n} $ and $U=\left[ \begin{array}{ccccc} 1 & 1 & \ldots & 1\\ 0 & 0 & \ldots & 0\\ \ldots& \ldots &\ldots &\ldots\\ \ldots& \ldots &\ldots &\ldots\\ \ldots& \ldots &\ldots &\ldots\\ 0 & 0 & \ldots & 0\\ \end{array} \right]_{n\times n}$.

MY TRY:

I used Schur Complement formula.

We have $\det(xI-M)=\det\begin{pmatrix}xI-N-V & -U\\-U^T& (x-1)I\end{pmatrix}$

Thus $\det(xI- M)=\det((x-1)I_n)\times \det ((xI-N-V)-U\frac{I}{x-1}U^{T})$

Since $UU^T=V$ we get

$\det \biggl((xI-N-V)-U\frac{I}{x-1}U^{T}\biggr)=\det \biggl((xI-N-V)-\frac{V}{x-1}\biggr)$

But I can't proceed further. How do I express the characteristic polynomial of $M$ in terms of the characteristic polynomial of $N$ from here?

Can someone please help me out?

What is the difference between the below notations?

Posted: 16 Jan 2022 12:03 AM PST

I've often come across two ways of integrating which I think mean the same thing.

  1. $\displaystyle \int f(x) dx$
  2. $\displaystyle \int_{-\infty}^\infty f(x) dx$

Do these two mean the same thing? Both seem to integrate the function over its whole domain. If yes, why don't we just use one of the two? And if no, what's the difference?

A problem when trying to compute the derivative of $\frac {e^{-x}x}{x+2}$

Posted: 16 Jan 2022 12:20 AM PST

I have this function, and I need to find intervals in which this function is increasing, and decreasing:

The function is this:

$$f(x) =\frac {e^{-x}x}{x+2}$$

Here's my attempt:

First, I rewrote the function as $$f(x) = \frac {x}{e^{x}(x+2)}.$$ Next, I found the domain of $f(x)$ and its derivative $f'(x)$. By the quotient rule, after collecting the common factor $e^x$, the final result I got is $$f'(x) = (e^x \cdot \text{something positive}) \cdot (-x^2+x+5).$$ I didn't write the denominator, because it is squared and hence always positive, and the same goes for $e^x$, so the first factor cannot change sign. The second factor can change the sign of the derivative, which is what I need for the increasing/decreasing intervals. However, I think the derivative is wrong, so here is a detailed account of what I did when computing it. The numerator from the quotient rule is $$e^x(x+2) - x \cdot[e^x (x+2) + e^x];$$ the denominator is always positive, so I didn't write it.

$e^x (x+2)$ $\to$ this is $1 \cdot e^x(x+2)$; the $1$ is the derivative of $x$, so I simply dropped it.

$x [e ^x (x+2) + e^x]$ $\to$ the $x$ stays as $x$, and the bracket is the derivative of the denominator (using the product rule).

Then I collected the common factor $e^x$ and obtained the final derivative $$f'(x) = (e^x\cdot \text{something positive}) \cdot(-x^2+x+5).$$
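One low-tech way to test a hand-computed derivative is to compare it against a finite-difference quotient at a few points. The sketch below (mine, not from the question) checks the simplified form $f'(x)=\frac{e^{-x}(-x^2-2x+2)}{(x+2)^2}$, which is what the quotient rule gives me for this $f$; if the numeric check passes, the quadratic factor controlling the sign is $-x^2-2x+2$.

```python
import math

def f(x):
    # the original function f(x) = e^{-x} * x / (x + 2)
    return math.exp(-x) * x / (x + 2)

def f_prime_candidate(x):
    # hand-simplified derivative (my computation): e^{-x}(-x^2 - 2x + 2) / (x + 2)^2
    return math.exp(-x) * (-x * x - 2 * x + 2) / (x + 2) ** 2

# Compare against a central finite difference at several sample points.
h = 1e-6
ok = all(
    abs((f(x + h) - f(x - h)) / (2 * h) - f_prime_candidate(x)) < 1e-5
    for x in [-1.5, -0.5, 0.0, 1.0, 3.0]
)
print(ok)  # True: the sign of f' is governed by -x^2 - 2x + 2
```

The same finite-difference trick can be used to test any candidate derivative before hunting for an algebra slip.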

Consider the non-homogeneous linear recurrence relation $a_n=2a_{n-1}+2^n$; find all solutions.

Posted: 16 Jan 2022 12:01 AM PST

For the homogeneous part $a_n^{(h)}$, I can show the characteristic equation is $r-2=0$, so $a_n^{(h)}=\alpha 2^n$.

But I'm stuck on the particular solution $a_n^{(p)}$: the ansatz gives $A2^n = 2A\,2^{n-1} + 2^n$,

which simplifies to $A\,2^n = A\,2^n + 2^n$,

so there is no solution for $A$.
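The ansatz $A2^n$ fails because $2$ is already a root of the characteristic equation; the usual textbook fix (not spelled out in the question, so treat this as a hint) is to try $a_n^{(p)} = An\,2^n$ instead. A quick check that $a_n = (\alpha + n)2^n$ then satisfies the recurrence:

```python
alpha = 5  # arbitrary constant; it would be fixed by an initial condition

def a(n):
    # candidate general solution: a_n = (alpha + n) * 2^n
    return (alpha + n) * 2 ** n

# Verify a_n = 2*a_{n-1} + 2^n for a range of n
ok = all(a(n) == 2 * a(n - 1) + 2 ** n for n in range(1, 30))
print(ok)  # True
```

Algebraically: $2a_{n-1}+2^n = (\alpha+n-1)2^n + 2^n = (\alpha+n)2^n = a_n$.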

Establish by mathematical induction that any postage of at least 12c can be obtained using 3c and 7c stamps.

Posted: 15 Jan 2022 11:21 PM PST

Let $P(n)$: $n=3x+7y$ where $x,y\ge 0$. Basis step: $P(12)$ is true, as $12=3(4)+7(0)$. Induction step: assume $P(n)$ is true.

I divided the argument into 3 cases (shown in the image).

Is that approach exactly correct?

Thank you in advance
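Independently of the induction argument, the claim itself is easy to brute-force over a range of values, which at least confirms the statement being proved (a sketch of my own):

```python
def representable(n):
    # Can n be written as 3x + 7y with integers x, y >= 0?
    return any((n - 7 * y) >= 0 and (n - 7 * y) % 3 == 0 for y in range(n // 7 + 1))

# Every postage from 12c up to 499c is reachable with 3c and 7c stamps
ok = all(representable(n) for n in range(12, 500))
print(ok)  # True
```

Note that $11$ is not representable, so the bound $12$ is sharp; the induction itself typically uses the three base cases $12, 13, 14$ and the step $n \mapsto n+3$.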

Logical conclusion from a function being neither increasing nor decreasing.

Posted: 16 Jan 2022 12:08 AM PST

A function $g:[a,b]→\mathbb{R}$ is increasing if $∀x_1<x_2$ in $[a,b]$, $g(x_1)≤g(x_2)$. Similar is the definition of a decreasing function.

So, if a function $f:[a,b]→\mathbb{R}$ satisfies property $p$: $f$ is neither increasing nor decreasing, then:

My conclusion ($c_1$): $∃x_1<x_2$ and $x_3<x_4$ in $[a,b]$ which satisfy $f(x_1)<f(x_2)$ and $f(x_3)>f(x_4)$ (or $f(x_1)>f(x_2)$ and $f(x_3)<f(x_4)$). That is, $f$ 'increases' for some $x_1,x_2$ and 'decreases' for some $x_3,x_4$ (or vice versa).

My textbook's and instructor's conclusion ($c_2$): $∃x_1<x_2<x_3$ in $[a,b]$ s.t. $f(x_1)<f(x_2)$ and $f(x_2)>f(x_3)$ (or vice versa).

But I think that $c_1$ can be logically concluded from $p$, but not $c_2$; that is, why should the middle points coincide, as $c_2$ requires? So, my questions are:

1. Which conclusion can be derived immediately from $p$ by logic?

2. Are $c_1$ and $c_2$ equivalent? If yes, how can we prove it?

There are a few theorems that can be easily proved from $c_2$ but not from $c_1$. I think the proofs that use $c_2$ are incomplete, but I can neither prove those theorems using $c_1$ nor derive $c_2$ from $c_1$, so I need some help.

Optimization with inner product condition

Posted: 16 Jan 2022 12:21 AM PST


I designed an optimization problem to improve the performance of the neural network learning process on the gradients. I spent hours on it, but unfortunately it did not work out. Consider $V_1$ and $V_2$, two $M$-dimensional vectors. We want to solve the following problem in order to obtain the desired vector $V$.

$$ \min_V \|V-V_1\|^2 + \|V-V_2\|^2 $$ $$ \text{s.t.}\quad V\cdot V_1>0, $$ $$ V\cdot V_2>0, $$ where $\cdot$ denotes the inner product.

Any assistance will bring me closer to solving the problem. Thanks
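One observation that may help (my own sketch, not from the question): ignoring the constraints, the objective is minimized at the midpoint $V^*=(V_1+V_2)/2$, since the gradient $2(V-V_1)+2(V-V_2)$ vanishes there. If the midpoint happens to satisfy $V\cdot V_1>0$ and $V\cdot V_2>0$, it already solves the constrained problem. A tiny gradient-descent check with made-up example vectors:

```python
# Example vectors (arbitrary, for illustration only)
v1 = [1.0, 2.0, 0.5]
v2 = [0.0, 1.0, 1.5]

v = [0.0, 0.0, 0.0]  # starting point
lr = 0.1
for _ in range(200):
    # gradient of ||v - v1||^2 + ||v - v2||^2 is 2(v - v1) + 2(v - v2)
    v = [vi - lr * (2 * (vi - a) + 2 * (vi - b)) for vi, a, b in zip(v, v1, v2)]

midpoint = [(a + b) / 2 for a, b in zip(v1, v2)]
print(v)  # converges to the midpoint

dot1 = sum(x * y for x, y in zip(v, v1))
dot2 = sum(x * y for x, y in zip(v, v2))
print(dot1 > 0 and dot2 > 0)  # for this example the constraints hold at the midpoint
```

When the midpoint violates a constraint, the problem becomes a genuinely constrained quadratic program and a projection or KKT analysis would be needed; the sketch only covers the easy case.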

Finding composite elementary matrix by applying operations to identity matrix. With homogeneous coordinates. Why are most right 0 values affected?

Posted: 15 Jan 2022 11:45 PM PST

I am trying to find the $3 \times 3$ elementary matrix that produces the described composite 2D transformation, using homogeneous coordinates.

  1. Translate by (−3,2)
  2. then scale the x-coordinate by 0.2 and the y-coordinate by 1.2

What I did:

I used a $3\times3$ identity matrix as a basis and applied the translation by adding $-3$ to the top row ($x$-coordinate) and $+2$ to the second row, to get the translation followed by the scaling of the $x$ and $y$ coordinates:

$\begin{bmatrix}(1-3)\cdot 0.2 & 0&0\\0&(1+2)\cdot 1.2&0\\0&0&1\end{bmatrix} = \begin{bmatrix}-0.4 & 0&0\\0&3.6&0\\0&0&1\end{bmatrix}$

It turns out completely wrong; the actual answer is shown in an image (not reproduced here).

My intuition was to isolate the transforms to the first two rows, which correspond to the $x$ and $y$ coordinates, but clearly my answer is completely off.

Based on the answer, we seem to add $-3$ and $2$ to the third entries of the first and second rows respectively before scaling. Why does translation affect those entries of the matrix?
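One way to see where the $-3$ and $2$ land (my own sketch): in homogeneous coordinates a translation matrix keeps the identity on the diagonal and puts the offsets in the *third column*; composing "translate, then scale" means multiplying the scale matrix on the left, which also scales those third-column entries.

```python
def matmul(a, b):
    """3x3 matrix product (row-major nested lists)."""
    return [[sum(a[i][k] * b[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

T = [[1, 0, -3],   # translate by (-3, 2): the offsets go in the third column
     [0, 1,  2],
     [0, 0,  1]]
S = [[0.2, 0,   0],  # scale x by 0.2, y by 1.2
     [0,   1.2, 0],
     [0,   0,   1]]

M = matmul(S, T)  # translate first, then scale => S applied after T
print(M)  # third column holds the scaled translation: (0.2*(-3), 1.2*2) = (-0.6, 2.4)
```

This matches the behavior described in the posted answer: the diagonal keeps the scale factors and the translation offsets end up in the third column, scaled by $0.2$ and $1.2$.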

Anticanonical line bundle of a threefold in a product of a hypersurface and $\mathbb P^1$

Posted: 16 Jan 2022 12:04 AM PST

I am reading a paper and there I have the following:

$F:=Y\times \mathbb P^1$ where $Y$ is a sextic hypersurface in the weighted projective space $\mathbb P(1^3,2,3)$.

(1) Let $X\subset F$ be a Fano threefold that is a (1,1)-section of $F$.

The anticanonical line bundle

(2) $\omega_X^{\vee}\cong (\mathcal O_F(2,2)\otimes \mathcal O_F(-X))|_X\cong \mathcal O_F(1,1)|_X$

The conormal sequence for the inclusion $X\hookrightarrow F$, twisted by the anticanonical line bundle, is

(3) $0\rightarrow \mathcal O_X\rightarrow\Omega_F^1(1,1)|_X\rightarrow \Omega_X^1(1,1)\rightarrow 0$

In (1), what does "$X$ is a (1,1)-section of $F$" mean?

In (2), how can I prove the last isomorphism? I know that the anticanonical line bundle is $\omega_X^{\vee}\cong \omega_F^{\vee}\otimes I/I^2\cong \mathcal O_F(2,2)\otimes \mathcal O_F(-X)$, since $\omega_F^{\vee}=\pi_1^*\mathcal O(6-(3+2+3))|_Y\otimes \pi_2^*\mathcal O(-2)=\mathcal O_F(2,2)$ in the notation of the paper. I also know that $I/I^2=\mathcal O_F(-X)$, but I don't understand how to prove the last isomorphism.

In (3), I know that the conormal sequence for the inclusion $X\hookrightarrow F$ is $0\rightarrow I/I^2\rightarrow\Omega_F^1|_X\rightarrow \Omega_X^1\rightarrow 0$, but why, after twisting it by $\omega_X^{\vee}$, is the first sheaf $\mathcal O_X$? I can't conclude that from (2).

How to translate "any open interval" and "any closed interval" from English to math symbols.

Posted: 15 Jan 2022 11:43 PM PST

In my intro to real analysis book, I came up with the following lemma (which is easy to prove) to help with an exercise.

For any open interval $K$: if every closed interval $S$ that is a subset of $K$ has the property that $\varphi(x)$ holds for every $x \in S$, then $\varphi(x)$ holds for every $x\in K$. $\quad(\dagger)$

For the proof, consider any $x \in K$ and the set $\{x\}$. This is a (degenerate) closed interval. Therefore, by assumption, $\varphi(x)$.

Question: Could someone help me translate the English of $(\dagger)$ into the appropriate math notation?

How does one encode "$K$ is any open interval" and "$S$ is any closed interval"? Are these strictly topological notions that require new notation? Or is there a clever way that uses basic quantifiers and inequalities?

For example, does the following syntax work?

$\forall K \Bigg[\bigg(\Big[\exists a,b \in \mathbb R: \forall x (a \lt x \lt b \rightarrow x \in K) \Big] \text { and } \Big[\forall S\color{blue}{\big(}\color{red}{(}\exists a,b \in K: \forall x (a \leq x \leq b \rightarrow x \in S)\color{red}{)}\rightarrow \forall x \in S: \varphi (x)\color{blue}{\big)}\Big]\bigg) \rightarrow \forall x \in K: \varphi (x) \Bigg] $

Edit:

I think it may be necessary to add a further specification to each of the conjuncts in the overarching antecedent.

For example, in the statement $\exists a,b \in \mathbb R: \forall x (a \lt x \lt b \rightarrow x \in K)$, I have not ensured that $K=\{x \in \mathbb R : a \lt x \lt b\}$. Rather, I have only ensured that $\{x \in \mathbb R : a \lt x \lt b\} \subseteq K$. To guarantee equality, I would have to add the condition that $\forall x ( x \leq a \text{ or } x \geq b \rightarrow x \notin K)$.

A similar extra condition would have to be stipulated for $S$.

If $G_1$ and $G_2$ are isomorphic groups, then show that $I(G_1) \cong I(G_2)$ and $\operatorname{Aut}(G_1) \cong \operatorname{Aut}(G_2)$ [closed]

Posted: 15 Jan 2022 11:40 PM PST

How do I solve this problem? If $G_1$ and $G_2$ are isomorphic groups, then show that $I(G_1) \cong I(G_2)$ and $\operatorname{Aut}(G_1) \cong \operatorname{Aut}(G_2)$. Thanks in advance.
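A hint in the direction such exercises usually go (a sketch, not a full solution): an isomorphism $\varphi:G_1\to G_2$ lets you transport automorphisms by conjugation.

```latex
% Sketch: transport of automorphisms along an isomorphism \varphi : G_1 \to G_2
\[
  \Phi : \operatorname{Aut}(G_1) \longrightarrow \operatorname{Aut}(G_2),
  \qquad
  \Phi(\alpha) = \varphi \circ \alpha \circ \varphi^{-1}.
\]
% \Phi is a group homomorphism with inverse \beta \mapsto \varphi^{-1} \circ \beta \circ \varphi,
% hence an isomorphism; one then checks it carries inner automorphisms to inner
% automorphisms, so it restricts to an isomorphism I(G_1) \cong I(G_2).
```

It remains to verify that $\Phi$ respects composition and maps conjugation by $g$ to conjugation by $\varphi(g)$.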

Show that any mapping from the extended complex plane to the Riemann Sphere back to the extended complex plane is a Möbius Transformation

Posted: 16 Jan 2022 12:22 AM PST

Definitions:
Extended Complex Plane $\mathbb{C}^\infty = \mathbb{C} \cup \{\infty\}$.
Stereographic projection: A mapping from a sphere in $\mathbb{R}^3$ to the extended complex plane.
Möbius Transformation: $z \mapsto \frac{a+bz}{c+dz}$ with $ad-bc \neq 0$.

The Question:
Starting with a point in $\mathbb{C}^\infty$, we map it to the sphere via inverse stereographic projection, then we apply a rotation of the sphere and we project it back to the extended complex plane by stereographic projection.
a) Show that any such mapping is a Möbius Transformation.
b) Does every Möbius transformation arise in this way? If so, give a proof. If not, then describe the subset of Möbius transformations that are obtained from a rotation of the sphere.

Intuitively I understand what is happening: when we rotate the sphere about the $z$-axis we get a rotation of the plane, while rotating about another axis causes translations, dilations, etc. I drew sketches of the sphere, the complex plane, and the different mappings, but I am not sure how to prove it mathematically using formulas. As for the second question, I also don't know where to start.
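For part (a), explicit formulas make the computation mechanical. With the unit sphere and projection from the north pole (one common convention; others differ by signs), the inverse projection of $z$ is $\left(\frac{2\operatorname{Re}z}{|z|^2+1},\frac{2\operatorname{Im}z}{|z|^2+1},\frac{|z|^2-1}{|z|^2+1}\right)$ and the projection sends $(X,Y,Z)\mapsto\frac{X+iY}{1-Z}$. A sketch of mine, using these conventions, checking that a rotation about the $z$-axis induces $z\mapsto e^{i\theta}z$, which is indeed a Möbius transformation:

```python
import cmath
import math

def inverse_stereo(z):
    """Unit-sphere point mapped to z by projection from the north pole."""
    d = abs(z) ** 2 + 1
    return (2 * z.real / d, 2 * z.imag / d, (abs(z) ** 2 - 1) / d)

def stereo(p):
    """Stereographic projection from the north pole (0, 0, 1)."""
    x, y, z = p
    return complex(x, y) / (1 - z)

def rotate_z(p, theta):
    """Rotate a point of R^3 about the z-axis by angle theta."""
    x, y, z = p
    c, s = math.cos(theta), math.sin(theta)
    return (c * x - s * y, s * x + c * y, z)

z0 = 0.7 - 1.2j
theta = 0.9
w = stereo(rotate_z(inverse_stereo(z0), theta))
print(abs(w - cmath.exp(1j * theta) * z0))  # ~ 0: the induced map is z -> e^{i*theta} z
```

The same plan works for rotations about other axes: write the rotation matrix, conjugate by the two projections, and simplify the resulting fraction into the form $\frac{a+bz}{c+dz}$.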

Question regarding a ring homomorphism of polynomials with coefficients in a field

Posted: 16 Jan 2022 12:09 AM PST

I have a map $\sigma_a:K[x]\to K[x]$, where $K$ is a field, defined by $g(x)\mapsto g(x+a)$. I'd like to show that this ring homomorphism is in fact an isomorphism. I can easily see the map is surjective; however, I can't seem to be able to show that it is injective. One approach I have taken is given $g(x+a)=g(x+y)$, attempting to show that $x=y$; however, I can only show that $g(x)=g(y)$, and I immediately think of situations like $x^2+1=y^2+1$, where $x$ can equal $1$, while $y$ equals $-1$. Another approach I have considered is showing that the kernel is trivial; however, again I am stuck as I'm not sure what the identity is in this scenario.

I'd like to ask for any help with both approaches as I am sure they will both be informative.

For more context, the purpose of this ring homomorphism is in proving that the polynomial $x^{p-1}+x^{p-2}+\dots+x+1$, where $p$ is a prime number, is irreducible in $\mathbb{Q}[x]$.

Rick Durrett Probability Theory 5th edition exercise 4.6.1

Posted: 16 Jan 2022 12:18 AM PST

Ex 4.6.1: Let $Z_1,Z_2, \ldots$ be independent and identically distributed with $E|Z_i|<\infty$, let $\theta$ be an independent random variable with finite mean, and let $Y_i=Z_i+\theta$. If $Z_i$ is normal$(0,1)$, then in statistical terms we have a sample from a normal population with variance 1 and unknown mean. The distribution of $\theta$ is called the prior distribution, and $P(\theta \in \cdot\mid Y_1,\ldots,Y_n)$ is called the posterior distribution after $n$ observations. Show $E(\theta \mid Y_1,\ldots,Y_n) \rightarrow \theta$ almost surely.

The problem is from Rick Durrett's Probability Theory 5th edition.

Theorem 4.6.8: Suppose $\mathcal{F}_n \uparrow \mathcal{F}_{\infty}$, i.e., $\mathcal{F}_n$ is an increasing sequence of $\sigma$-fields and $\mathcal{F}_{\infty}=\sigma(\cup_n \mathcal{F}_n)$. As $n \rightarrow \infty,$ $E(X\mid \mathcal{F}_n) \rightarrow E(X\mid \mathcal{F}_{\infty})$ almost surely and in $L^1$.

Attempt: Let $\mathcal{F}_n=\sigma(Y_1,\ldots,Y_n)$, then by theorem 4.6.8, we have $E(\theta \mid \mathcal{F}_n) \rightarrow E(\theta\mid \mathcal{F}_{\infty})$. Then I am told to show $\theta \in \mathcal{F}_{\infty}$ using strong law of large number.

My question lies in using the strong law of large numbers. Since $Z_i \sim \text{normal}(0,1)$, $E Z_i=0$, so $EY_i=E \theta$. By the strong law of large numbers, $\frac{Y_1+\cdots+Y_n}{n} \rightarrow E\theta$ almost surely, not $\rightarrow \theta$. That's what got me here. Thanks in advance!

Update: https://math.stackexchange.com/questions/264198/convergence-of-empirical-distribution?rq=1

Based on the link above and Eric (Thanks!)'s comment, I think I got it.

Since $EZ_i=0$, by the strong law of large numbers,

$\begin{align} \lim_{n \rightarrow \infty} \frac{Z_1+...+Z_n}{n}=EZ_i=0 \\ \Rightarrow \theta+\lim_{n \rightarrow \infty} \frac{Z_1+...+Z_n}{n}=\theta \\ \Rightarrow \frac{n\theta}{n}+\lim_{n \rightarrow \infty} \frac{Z_1+...+Z_n}{n}=\theta \\ \Rightarrow \lim_{n \rightarrow \infty} \frac{n\theta+Z_1+...+Z_n}{n}=\theta \\ \Rightarrow \lim_{n \rightarrow \infty} \frac{(Z_1+\theta)+...+(Z_n+\theta)}{n}=\theta \\ \Rightarrow \lim_{n \rightarrow \infty} \frac{Y_1+...+Y_n}{n}=\theta \end{align} $.
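The last display can be spot-checked by simulation (a sketch of mine: $\theta$ is drawn once, held fixed, and then averaged over many noisy observations $Y_i = Z_i + \theta$, which is the conditioning-on-$\theta$ picture):

```python
import random

random.seed(0)
theta = random.uniform(-2, 2)  # one draw of theta from a prior with finite mean
n = 200_000

# Y_i = Z_i + theta with Z_i ~ N(0, 1); the sample mean should approach theta
sample_mean = sum(random.gauss(0, 1) + theta for _ in range(n)) / n
print(abs(sample_mean - theta))  # small, consistent with (Y_1 + ... + Y_n)/n -> theta a.s.
```

The key point the derivation makes is that the strong law is applied conditionally on $\theta$, so the limit is the realized value $\theta$, not $E\theta$.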

Make slope of a sigmoid curve increase linearly

Posted: 16 Jan 2022 12:16 AM PST

Currently I have an equation:

$$ y = \frac{100}{1+3\cdot 100\,e^{-0.12\cdot 100\,xl}} -0.332 $$

where $l$ is a variable between 1 and 100.

Here is a visualization of the equation (graph not reproduced here).

When $l$ is set to 1, that is what the graph looks like.

However, when setting it to 2, the graph changes drastically, moving the slope over by $0.25$ units on the $x$-axis. Incrementing $l$ by 1 produces the same behavior: the slope gets exponentially closer to $0$, until at $l=10$ the graph almost hits its apex at $x=0.1$. This is behavior I would like to avoid. For example, at $l=10$ the graph is perfect, except that this is what I would like it to look like at $l=100$.


Also, the graph doesn't progress linearly (sorry if this doesn't make sense; I don't know any other way to explain it). Half of the "movement" between $l=1$ and $l=10$ is already made by $l=2$.


If you put them all in one image, you see that $l=2$ sits in the middle of the $l=1$ to $l=10$ range (red is 10, green is 2, blue is 1).


Is there a way to make the graph's slope decrease linearly as $l$ goes from 1 to 100, so that the slope is almost at $x=0$ only when $l=100$?

Regarding an integer being a sum of three primes...

Posted: 16 Jan 2022 12:13 AM PST

If I knew that every even number up to $4 \times 10^{14} $ is the sum of two primes, roughly how many primes would I need to find in order to show that every odd number up to $10^{22}$ is the sum of three primes? Why does this make efficient primality testing important?

Well... if we know that every even number up to $4 \times 10^{14}$ is the sum of two primes, then adding $3$ to each even number gives every odd number up to $4 \times 10^{14}+3$ as a sum of three primes. We can also add primes bigger than $3$, say $5$ or $7$, to the even numbers verified to be the sum of two primes, but eventually we would get gaps. I am not sure how to systematically extend the first assertion about sums of two primes to the second assertion about sums of three primes, and roughly how many primes would I actually need? I am also not really sure about the last question about primality testing.
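The usual bookkeeping goes like this (my own sketch, run at toy scale with $N_0=100$ standing in for $4\times10^{14}$ and $N=2000$ for $10^{22}$): for each odd $n$, pick any prime $p$ with $n - N_0 \le p \le n-4$; then $n-p$ is even and at most $N_0$, hence a verified sum of two primes, so $n = p + (n-p)$ is a sum of three primes. One prime per window of length $N_0$ suffices, so roughly $10^{22}/(4\times10^{14}) = 2.5\times10^{7}$ primes would be needed, and each candidate must be *certified* prime near $10^{22}$, which is why efficient primality testing matters.

```python
def primes_up_to(n):
    """Simple sieve of Eratosthenes."""
    sieve = [True] * (n + 1)
    sieve[0] = sieve[1] = False
    for i in range(2, int(n ** 0.5) + 1):
        if sieve[i]:
            for j in range(i * i, n + 1, i):
                sieve[j] = False
    return [i for i, ok in enumerate(sieve) if ok]

N0, N = 100, 2000  # toy stand-ins for 4e14 and 1e22
primes = primes_up_to(N)
prime_set = set(primes)

# Step 1: verify the Goldbach property for even numbers up to N0.
goldbach = all(
    any((n - p) in prime_set for p in primes if p <= n - 2)
    for n in range(4, N0 + 1, 2)
)

# Step 2: every odd n <= N has an odd prime p with 4 <= n - p <= N0.
covered = all(
    any(p in prime_set for p in range(max(3, n - N0), n - 3))
    for n in range(9, N + 1, 2)
)
print(goldbach and covered)  # True at this toy scale
```

At the real scale, prime gaps near $10^{22}$ are far smaller than $4\times10^{14}$, so each window does contain primes; the computational burden is certifying them.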

Asymptotics of second kind Bessel function

Posted: 15 Jan 2022 11:58 PM PST

I would like to obtain the asymptotics of $K_{i\nu}(\nu z)$, the modified Bessel function of the second kind of order $i\nu$, for large $\nu$. The parameter $\nu$ is positive, $\nu>0$, and $z >0$ too.

I have read the corresponding chapter from the book by Olver ("Asymptotics and special functions") (according to DLMF, pages 378-382 from Chapter 8 are relevant). However, my understanding is still poor. Olver suggests as an exercise to derive this asymptotics and states the final answer, $$K_{i\nu}(\nu z)\approx\sqrt{\frac{\pi}{2\nu}}\frac{\exp(-\pi\nu/2-\nu\zeta)}{(z^2-1)^{1/4}}\left(\sum_{s=0}^{n-1}\frac{(-1)^s\hat{U}_s(\hat{p})}{\nu^s}+\phi_n(\nu,z)\right), \tag{*}$$ where $\zeta=\sqrt{z^2-1}-\text{Arcsec}(z)$. I have tried to rederive this expression and in general I understand how it appears.

However, Olver refers to the original work by Balogh (link). I have read this paper and my understanding disappeared. One of the final results in this paper is the following expression, $$K_{i\nu}(\nu z) = \frac{\pi\sqrt{2}e^{-\pi\nu/2}}{\nu^{1/3}}\left(\frac{\eta}{z^2-1}\right)^{1/4}\left(\text{Ai}(\beta)+\frac{\text{Ai}'(\beta)}{\nu^{4/3}}+\ldots\right), \tag{**}$$ where the quantities $\eta$ and $\beta$ are given by $$\frac{2}{3}\eta^{3/2}=\sqrt{z^2-1}-\text{Arcsec}(z),\quad \beta=-\nu^{2/3}\eta.$$

Finally, there is one more paper by Dunster (which is the most of interest for me, link), where the quite similar (up to some redefinitions) expression appears.

In addition, Dunster states that $(**)$ and $(*)$ coincide (cf. (4.8) with (4.25)). I tried my best but still cannot connect $(*)$ and $(**)$ to each other. I do not understand what I should do (I tried to perform a large-$\nu$ expansion of the Airy functions).

My final goal is to find large-$\nu$ asymptotics for $I_{i\nu}(\nu z)$ and $K_{i\nu}(\nu z)$. Can anyone clarify how $(**)$ becomes $(*)$, or simply give more references with more details?

Further investigation is performed around the well-known uniform expansion $$K_{\nu}(\nu z)=\sqrt{\frac{\pi}{2\nu}}\frac{e^{-\nu\gamma}}{(1+z^2)^{1/4}}\sum_{s=0}^{\infty}(-1)^s\frac{U_s(p)}{\nu^s}, \tag{***}$$ where $U_s(p)$ is some function and $p=(1+z^2)^{-1/2}$ (I am not interested in this term). Naively, I can just replace $\nu\rightarrow i\nu$ and $z\rightarrow -iz$ in order to deal with $K_{i\nu}(\nu z)$. Surprisingly for me, it works: I reproduce the result in Olver's book (the very first formula). The trickiest moment is to recall that $$\text{Arcsec}(z)=i\ln\frac{z}{1+\sqrt{1-z^2}},$$ which seems correct. Finally, there is the following mapping (based on Olver's book, Ch. 8), $$\hat{U}_s(x)=i^sU_s(-ix),\quad \hat{p}=(z^2-1)^{-1/2}.$$

Explicit Local Fundamental Class

Posted: 16 Jan 2022 12:25 AM PST

Let $L/K$ be a Galois extension of local fields of degree $n:=[L:K]<\infty$ with Galois group $G:=\operatorname{Gal}(L/K)$. In short, my question is as follows.

Is there an explicit computation of the fundamental class $u_{L/K}\in H^2\left(G,L^\times\right)$?

Here, the fundamental class is the generator of $H^2\left(G,L^\times\right)$ found by pulling back the generator $\frac1n\in\frac1n\mathbb Z/\mathbb Z$ along the invariant map $\operatorname{inv}_{L/K}:H^2\left(G,L^\times\right)\to\frac1n\mathbb Z/\mathbb Z$.

I record some thoughts below.


In the case where $L/K$ is unramified with Galois group $G:=\operatorname{Gal}(L/K)=\langle\operatorname{Frob}_{L/K}\rangle$, we have an explicit construction of the (local) invariant map $\operatorname{inv}_{L/K}:H^2\left(G,L^\times\right)\to\frac1n\mathbb Z/\mathbb Z$ by $$H^2\left(G,L^\times\right)\stackrel{\operatorname{ord}_L}\to H^2(G,\mathbb Z)\stackrel\delta\leftarrow H^1(G,\mathbb Q/\mathbb Z)\simeq\operatorname{Hom}_\mathbb Z(G,\mathbb Q/\mathbb Z)\stackrel{f\mapsto f(\operatorname{Frob}_{L/K})}\to\frac1n\mathbb Z/\mathbb Z,$$ and then we can pull the (more or less canonical) generator $\frac1n$ back to the (canonical) generator $u_{L/K}\in H^2(G,L^\times),$ which is called the fundamental class. All of this computation can be made explicit (see, for example the discussion preceding Proposition III.1.9 of Milne's notes); for completeness, we record that the computation gives $$u_{L/K}\left(1,\operatorname{Frob}_{L/K}^k,\operatorname{Frob}_{L/K}^{k+\ell}\right)=\begin{cases} \varpi & k+\ell\ge n, \\ 1 & k+\ell<n, \end{cases}$$ where $\varpi$ is a uniformizer of $L.$


In the general case (i.e., dropping the assumption that $L/K$ is unramified), the invariant map is not so explicit. Looking at the proofs I've found, the most explicit construction comes from writing the exact sequence $$0\to H^2\left(\operatorname{Gal}(L/K),L^\times\right)\to H^2\left(\operatorname{Gal}(K^{\text{unr}}/K),K^{\text{unr}\times}\right)\to H^2\left(\operatorname{Gal}(L^{\text{unr}}/L),L^{\text{unr}\times}\right).$$ (It takes some amount of work to define the maps and check that this is in fact an exact sequence; it is essentially Lemma III.2.2 in Milne's notes.) In theory, we could say that $u_{L/K}$ is the pull-back of $\operatorname{inv}_{K^{\text{unr}}/K}^{-1}\left(1/n\right)$ to $H^2(G,L^\times)$. However, I have been unable to make this computation explicit.

Zero-divisors vs nilpotents (in Noetherian rings without idempotents)

Posted: 16 Jan 2022 12:02 AM PST

I've looked at earlier similar questions and, as far as I could see, the examples of zero divisors that are not nilpotent are idempotents. I tried to prove that those are the only examples, at least in some cases, but could not. So:

Let $k$ be a field and let $R$ be a finitely generated and reduced $k$-algebra such that the only idempotents in $R$ are $0$ and $1$. Is it the case that the only zero-divisor of $R$ is $0$?

Prove that this ideal is not a subset of this set

Posted: 16 Jan 2022 12:17 AM PST

This question was part of my class notes in commutative algebra, and it was left for us to solve on our own.

Let $K[X,Y]$ be the polynomial ring over a field $K$, $A'=\left< X^2 ,XY\right> \subseteq K[X,Y]$ and $S=K[X,Y]\setminus\left<X\right>$. Then show that $A'\nsubseteq A'S^{-1} (K[X,Y]) \cap K[X,Y]$.

I think $A'$ should always be a subset of the RHS, since $A'$ is an ideal, but I am asked to prove otherwise.

Can you please tell me what's wrong with my argument and help with the proof?

Thanks!

Doubt about the statement of the Fundamental Theorem of Algebra.

Posted: 15 Jan 2022 11:25 PM PST

I have a doubt, not about the proof, but about the nomenclature of a theorem.

I'm researching some interesting applications of the Inverse Function Theorem and found a version of the Fundamental Theorem of Algebra, the proof of which applies that theorem. The version is as follows:

Fundamental Theorem of Algebra: If $p:\mathbb{C} \to \mathbb{C}$ is a nonconstant polynomial defined by $$p(z) = a_0 + a_1z + \ldots + a_nz^n,$$ where $a_n \neq 0$ and $n \geq 1$, then $p$ is surjective. In particular, there exists $z_0 \in \mathbb{C}$ such that $p(z_0) = 0$.

I found this version a bit "strange" as it looks weaker than the following:

Fundamental Theorem of Algebra: If $p:\mathbb{C} \to \mathbb{C}$ is a nonconstant complex polynomial defined by $$p(z) = a_0 + a_1z + \ldots + a_nz^n,$$ where $a_n \neq 0$ and $n \geq 1$, then the equation $p(z) = 0$ has $n$ solutions, not necessarily distinct.

Is the first version I listed correct? Are they equivalent? To me they don't seem to be: the first seems weaker than the second.

Taking advantage of the question, I would also like to ask for suggestions of other applications of this theorem (the Inverse Function Theorem, in this case).

Uniform convergence of a sequence of partial sums

Posted: 16 Jan 2022 12:00 AM PST

Show that each sequence of partial sums and its derivative converges uniformly on their respective intervals.

a) $$ S_n(x)=\sum_{k=1}^n \frac{1}{(1 + kx)^2}, \quad x \in (0,\infty)$$

$$ S_n'(x)=\sum_{k=1}^n \frac{-2k}{(1 + kx)^3}, \quad x \in (0,\infty)$$

b) $$S_n(x)=\sum_{k=1}^n e^{-kx}, \quad x \in(0,\infty)$$

$$S_n'(x)=\sum_{k=1}^n -ke^{-kx}, \quad x \in(0,\infty)$$

For part a) I believe I can make use of the comparison $$\sum_{n=1}^\infty \frac{1}{(1+nx)^2} < \sum_{n=1}^\infty \frac{1}{(nx)^2} = \frac{1}{x^2}\sum_{n=1}^\infty \frac{1}{n^2},$$ so the infinite series converges. I know that I need to use the Weierstrass M-test and the comparison test to show that the sequences $\{S_n(x)\}$ and $\{S_n'(x)\}$ converge uniformly on $(0,\infty)$.

Similarly, for part b) $$\sum_{n=1}^\infty e^{-nx} = \sum_{n=1}^\infty \left(e^{-x}\right)^n$$ which shows the infinite series is a geometric series and converges.

I need help showing that $\{S_n(x)\}$ and $\{S_n'(x)\}$ converge uniformly for parts a) and b). I think I need to show the convergence is uniform on $[a,\infty)$ for every $a>0$, which would cover all $x\in(0,\infty)$, but I'm not sure. Can anyone help?
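Reading the summand in a) as $\frac{1}{(1+kx)^2}$ (so that $S_n$ is a genuine partial sum), the terms on $[a,\infty)$ with $a>0$ are maximized at $x=a$, so the Weierstrass bound $M_k = \frac{1}{(1+ka)^2}$ works and $\sum_k M_k < \infty$. A numeric sketch of mine of the resulting tail bound, which controls $\sup_{x\ge a}|S(x)-S_n(x)|$:

```python
a = 0.5  # left endpoint of [a, inf); any a > 0 works

def tail_bound(n, terms=200_000):
    """Numerically truncated bound sum_{k>n} 1/(1+k*a)^2 on the uniform error over [a, inf)."""
    return sum(1.0 / (1 + k * a) ** 2 for k in range(n + 1, terms))

tails = [tail_bound(n) for n in (10, 100, 1000)]
print(tails)  # decreasing toward 0 as n grows
```

The same pattern with $M_k = ke^{-ka}$ handles part b) on $[a,\infty)$; note that the bounds blow up as $a\to 0^+$, which is why the interval $[a,\infty)$ rather than all of $(0,\infty)$ appears in the argument.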
