Tuesday, April 13, 2021

Recent Questions - Mathematics Stack Exchange

Prove difference of infinite series of decreasing function and its integral converges

Posted: 13 Apr 2021 07:42 PM PDT

$f(x)$ is continuous and decreasing on $[0, \infty)$, and $f(n) \to 0$. Let $a_n = f(0) + f(1) + \dots + f(n-1) - \int^{n}_{0}f(x)\,dx$. Show that $a_n$ converges (from Mattuck's Analysis).

Solution:

For any continuous function $f$ and integers $a, b$ with $b > a$, let $L(f, a, b)$ be the lower Riemann sum of $f$ over the unit subintervals $[a, a+1], [a+1, a+2], \dots, [b-1, b]$ (using the infimum of $f$ on each subinterval), and let $U(f, a, b)$ be the corresponding upper Riemann sum (using the supremum). By the definition of the integral, $L(f, a, b) \leq \int_a^b f(x)\,dx \leq U(f,a,b)$. Since $f$ is decreasing, on any unit subinterval $[m, m+1]$ its supremum is $f(m)$ and its infimum is $f(m+1)$. Therefore $a_n = U(f,0,n) - \int^n_0 f(x)\,dx$, and $a_{n+m} = a_n + U(f, n, n+m) - \int^{n+m}_n f(x)\,dx$.

Furthermore, note that $f$ is always nonnegative, since it is decreasing and has limit zero.

Since $f$ is decreasing, the upper and lower sums over $[n, n+m]$ differ only in their endpoint terms: $U(f,n,n+m) - L(f,n,n+m) = f(n) - f(n+m)$. Now, since $f(n) \to 0$, for any $\epsilon > 0$ there exists $n$ such that $f(n) < \epsilon$, and then for any $m > 0$, $$0 \leq a_{n+m} - a_n = U(f,n,n+m) - \int_n^{n+m} f(x)\,dx \leq U(f,n,n+m) - L(f,n,n+m) = f(n) - f(n+m) \leq f(n) < \epsilon$$ (the next-to-last inequality uses $f(n+m) \geq 0$). Therefore $|a_{n+m} - a_n| < \epsilon$ for all $m$, so $(a_n)$ is Cauchy and converges. QED.
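
A quick numerical sanity check of this convergence (not part of the proof; the test function $f(x) = 1/(x+1)$ is my own choice, for which the limit is the Euler-Mascheroni constant $\approx 0.5772$):

    # Check that a_n = f(0) + ... + f(n-1) - int_0^n f(x) dx settles to a limit
    # for f(x) = 1/(x+1), which is continuous, decreasing, and tends to 0.
    import math

    def a(n, f=lambda x: 1.0 / (x + 1.0)):
        partial_sum = sum(f(k) for k in range(n))   # f(0) + f(1) + ... + f(n-1)
        integral = math.log(n + 1.0)                # exact value of int_0^n dx/(x+1)
        return partial_sum - integral

    for n in (10, 100, 1000, 10000):
        print(n, a(n))   # values approach ~0.57722, consistent with a_n being Cauchy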

Does "left adjoints preserve joins" apply to the empty set?

Posted: 13 Apr 2021 07:35 PM PDT

In the book https://math.mit.edu/~dspivak/teaching/sp18/7Sketches.pdf, 2.87(b) and the answer to 2.104(1) assume that "left adjoints preserve joins" applies to the empty set, i.e. $\left( v \otimes \bigvee_{a \in A}a\right) \cong \bigvee_{a \in A}(v \otimes a)$ where $A \subseteq V$, including when $A = \emptyset$.
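
For what it's worth, here is one way to read the empty case (my own gloss, not the book's wording): the empty join is the bottom element $0$ of $V$, and in a closed monoidal preorder $v \otimes -$ is a left adjoint, hence preserves all joins, the empty one included, so $$v \otimes 0 = v \otimes \bigvee \emptyset \cong \bigvee_{a \in \emptyset}(v \otimes a) = 0.$$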

Which estimator is better: $S'^2$ or $S^2$?

Posted: 13 Apr 2021 07:43 PM PDT

One of them is biased while the other is unbiased, like $S'^2$ and $S^2$. To compare them, should we look at mean squared error, at variance, or at whether each is unbiased?
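
A small Monte Carlo sketch of the comparison (my own setup, assuming $S^2$ divides by $n-1$, $S'^2$ divides by $n$, and normal data):

    # Compare the unbiased variance estimator S^2 (divide by n-1) with the
    # biased one S'^2 (divide by n) on N(0, sigma^2) samples.
    import numpy as np

    rng = np.random.default_rng(0)
    sigma2, n, reps = 4.0, 10, 200_000
    x = rng.normal(0.0, np.sqrt(sigma2), size=(reps, n))

    s2  = x.var(axis=1, ddof=1)   # S^2:  divides by n - 1, unbiased
    s2p = x.var(axis=1, ddof=0)   # S'^2: divides by n, biased

    for name, est in [("S^2 ", s2), ("S'^2", s2p)]:
        bias = est.mean() - sigma2
        mse = ((est - sigma2) ** 2).mean()
        print(f"{name}: bias ~ {bias:+.4f}, MSE ~ {mse:.4f}")
    # Typically S'^2 shows a nonzero bias but a smaller MSE for normal data,
    # which is exactly the trade-off the question asks about.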

N variables Quadratic Form matrix operations proof

Posted: 13 Apr 2021 07:34 PM PDT

I have to show that if I start in a quadratic form of $n$ variables:

$f (x_1, \dots, x_n) = \sum_{i=1}^n a_{i}x_{i}^2 + \sum_{i=2}^{n}\sum_{j=1}^{i-1} a_{ij}x_{i}x_{j} \quad = \quad \textbf{x}^T\cdot\textbf{A}\cdot\textbf{x}$

and prove that $A\in \mathbb{R}^{n\times n}$ is a symmetric matrix with diagonal elements given by $a_i$ and off-diagonal elements given by $a_{ij} / 2$.

I already know why $A$ is symmetric and how it is customarily represented, but how can I get from my general expression $f(x_1, \dots, x_n)$ to the representation $\textbf{x}^T\cdot\textbf{A}\cdot\textbf{x}$, and also show what the diagonal elements of the matrix are?
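
A numerical illustration of the identity being asked about (the coefficients are my own example values): build the symmetric $A$ with $A_{ii} = a_i$ and $A_{ij} = A_{ji} = a_{ij}/2$, then check that $\textbf{x}^T A \textbf{x}$ reproduces $f$.

    # x^T A x with A_ii = a_i and A_ij = A_ji = a_ij / 2 reproduces
    # f(x) = sum_i a_i x_i^2 + sum_{i>j} a_ij x_i x_j.
    import numpy as np

    a_diag = np.array([1.0, 2.0, 3.0])                   # a_i (illustrative)
    a_cross = {(1, 0): 4.0, (2, 0): 5.0, (2, 1): 6.0}    # a_ij for i > j

    A = np.diag(a_diag)
    for (i, j), c in a_cross.items():
        A[i, j] = A[j, i] = c / 2.0                      # split each cross term

    x = np.array([0.7, -1.2, 2.5])
    f = (a_diag * x**2).sum() + sum(c * x[i] * x[j] for (i, j), c in a_cross.items())
    print(np.isclose(x @ A @ x, f))                      # True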

Advanced Statistical Modeling (estimator and limiting distribution)

Posted: 13 Apr 2021 07:28 PM PDT

Is a signed measure finite if it is defined as the integral of a function with respect to a finite measure?

Posted: 13 Apr 2021 07:35 PM PDT

Suppose $(X, M, \mu)$ is a complete finite measure space with $f\in L^{1}(X, M, \mu)$. Define the signed measure $v(E) = \int_E f d\mu$.

Since we know that $\mu(E)<\infty$, do we also have that $-\infty< v(E)<\infty$ for all $E\in M$?

My thoughts are that if $f$ ever achieved the values $-\infty$ or $\infty$, then we could have an infinite integral and hence the signed measure would not be finite for some set $E$ ... however, I think this would contradict $f$ being $\mu$-integrable. I think that if $f$ is integrable, then it should only take finite values ... then, since $f(x)$ is finite for all $x$ in consideration and $\mu(E)$ is finite for all $E$ in consideration, the integral should be finite and hence the signed measure is finite.
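
Indeed, the standard estimate makes this precise in one line: for every $E \in M$, $$|v(E)| = \left|\int_E f \, d\mu\right| \leq \int_E |f| \, d\mu \leq \int_X |f| \, d\mu = \|f\|_{L^1} < \infty,$$ so $v$ is a finite signed measure; and $f \in L^1$ forces $|f| < \infty$ $\mu$-a.e., which is why the worry about infinite values does not arise.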

Can someone clarify this for me? I have not been successful in finding an answer, and so far all I have to go on is my intuition (which is oftentimes wrong). Links to additional resources are welcome! Thank you in advance!

Generalized way to find mathematical symbols?

Posted: 13 Apr 2021 07:31 PM PDT

I have this problem frequently in math. Someone will decide to use some new symbol and you want to add it to your notes but without the name this has proven difficult. In my case, I'm currently reading a book on machine learning and ran across this:

[image: an unfamiliar mathematical symbol from the book]

Is there a good general way to find out the names of these (sometimes obscure) mathematical symbols? I usually just check Wikipedia's list of Greek letters, and if the symbol isn't there I'm never sure where to look next.

PS: Willing to update the tags - couldn't find a good one to fit the question.

Jordan Canonical form and change of basis matrix (vector order)

Posted: 13 Apr 2021 07:22 PM PDT

this is my first question here. I'm having trouble understanding Jordan canonical form, specifically the order of the vectors in the change of basis matrix. I read another question here that didn't make it clear for me, so I tried experimenting with this example to see if I could understand:

$A=\begin{pmatrix}2&0&0&0\\0&2&0&0\\0&0&2&1\\1&0&0&2\end{pmatrix}$

So, here we have:

$null(A-2I) = span\{\begin{bmatrix}0\\1\\0\\0\end{bmatrix};\begin{bmatrix}0\\0\\1\\0\end{bmatrix}\}$

$null((A-2I)^2) = span\{\begin{bmatrix}0\\1\\0\\0\end{bmatrix};\begin{bmatrix}0\\0\\1\\0\end{bmatrix};\begin{bmatrix}0\\0\\0\\1\end{bmatrix}\}$

$null((A-2I)^3) = span\{\begin{bmatrix}1\\0\\0\\0\end{bmatrix};\begin{bmatrix}0\\1\\0\\0\end{bmatrix};\begin{bmatrix}0\\0\\1\\0\end{bmatrix};\begin{bmatrix}0\\0\\0\\1\end{bmatrix}\}$

In that question, they used the Jordan Matrix $J=\begin{pmatrix}2&1&0&0\\0&2&1&0\\0&0&2&0\\0&0&0&2\end{pmatrix}$

But I wanted to try and change it a bit so I used $J=\begin{pmatrix}2&0&0&0\\0&2&1&0\\0&0&2&1\\0&0&0&2\end{pmatrix}$

(I swapped the boxes)

And I ordered the vectors of the basis differently to make it work: $B=\{(0,1,0,0);(0,0,1,0);(0,0,0,1);(1,0,0,0)\}$

So the change of basis matrix now is $P=\begin{pmatrix}0&1&0&0\\0&0&1&0\\0&0&0&1\\1&0&0&0\end{pmatrix}$

I thought that $A=PJP^{-1}$. But instead, $PJP^{-1}=\begin{pmatrix}2&1&0&0\\0&2&1&0\\0&0&2&0\\0&0&0&2\end{pmatrix}$, which is the Jordan matrix with the blocks in a different order (the one they used in the other question). I also noticed that if instead of $PJP^{-1}$ I computed $P^{-1}JP$, I actually got $A$. Why does this happen?

That's what I'm trying to understand. I'm looking for a clear way to proceed when building the change of basis matrix $P$ such that $A=PJP^{-1}$, especially the order of the vectors depending on the Jordan form I choose.
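
A quick numerical check of the convention issue (my own script): if the basis vectors of $B$ are written as the columns of $P$, then $A = PJP^{-1}$ holds with the swapped $J$; the $P$ above has them as rows, i.e. it is the transpose, and for a permutation matrix $P^T = P^{-1}$, which would explain why $P^{-1}JP$ gave back $A$.

    # Verify the column convention for the change-of-basis matrix P.
    import numpy as np

    A = np.array([[2,0,0,0],[0,2,0,0],[0,0,2,1],[1,0,0,2]], dtype=float)
    J = np.array([[2,0,0,0],[0,2,1,0],[0,0,2,1],[0,0,0,2]], dtype=float)

    # Basis vectors of B, in order, as the COLUMNS of P:
    P = np.column_stack([[0,1,0,0],[0,0,1,0],[0,0,0,1],[1,0,0,0]])

    print(np.allclose(A, P @ J @ np.linalg.inv(P)))   # True: A = P J P^{-1}
    # The P in the question is this one transposed (vectors as rows);
    # since P^T = P^{-1} for permutation matrices, P^{-1} J P gave A instead.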

Is $ \frac{x^2 -4}{x-2} $ a polynomial?

Posted: 13 Apr 2021 07:19 PM PDT

$ \frac{x^2 -4}{x-2} $ doesn't look like a polynomial because of the $x$ in the denominator. But it can be factored: after factoring and cancelling, $ \frac{x^2 -4}{x-2} = \frac{(x+2)(x-2)}{x-2} = x+2 $, and $x+2$ is a polynomial, which satisfies the definition of a polynomial.

So is the original expression $ \frac{x^2 -4}{x-2} $ a polynomial or not?
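
A computer algebra system illustrates the distinction nicely (a sympy sketch): cancellation produces the polynomial $x+2$, but the original expression is still undefined at $x=2$, so as a function it is not quite the same as $x+2$.

    # The rational expression equals x + 2 everywhere except x = 2.
    from sympy import symbols, cancel

    x = symbols('x')
    expr = (x**2 - 4) / (x - 2)

    print(cancel(expr))       # x + 2   (equal as rational expressions)
    print(expr.subs(x, 2))    # nan     (the original is undefined at x = 2)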

Distribution of minimum of Uniform products

Posted: 13 Apr 2021 07:26 PM PDT

I've been stuck on this question for a while.

Let $U_{i}$ be iid Uniform$(0,1)$ random variables, and let
$V_{n} = \prod_{i=1}^{n} U_i$, $n = 1, 2, \dots$,
$N = \min \{k: V_k < 0.1\}$.
Find the distribution of $N$.

I know that this is supposed to be related somehow to a Poisson distribution, based on other things that I have learnt, but that was with sums of uniform random variables, not products. I honestly just don't know where to begin. Any help would be very much appreciated!
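
One standard route, sketched with a simulation to make the target concrete: $-\ln U_i$ are iid Exponential$(1)$, so $-\ln V_k$ is the $k$-th arrival time of a rate-$1$ Poisson process, and $N-1$ counts the arrivals in $[0, \ln 10]$; hence $N - 1 \sim \text{Poisson}(\ln 10)$.

    # Monte Carlo evidence that N - 1 ~ Poisson(ln 10): mean and variance
    # of N - 1 should both be close to ln 10 ~ 2.3026.
    import numpy as np

    rng = np.random.default_rng(1)
    reps = 100_000
    samples = np.empty(reps, dtype=int)
    for r in range(reps):
        v, k = 1.0, 0
        while v >= 0.1:
            v *= rng.uniform()   # multiply in another U_i
            k += 1
        samples[r] = k           # this is N

    print("mean of N - 1:", (samples - 1).mean())   # ~2.30
    print("var  of N - 1:", (samples - 1).var())    # ~2.30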

Prove that $\prod_{i\geq 1}\frac{1}{1-xy^{2i-1}} = \sum_{n\geq 0} \frac{(xy)^{n}}{\prod_{i=1}^{n}\left( 1-y^{2i} \right)}.$

Posted: 13 Apr 2021 07:14 PM PDT

Prove that $$\prod_{i\geq 1}\frac{1}{1-xy^{2i-1}} = \sum_{n\geq 0} \frac{(xy)^{n}}{\prod_{i=1}^{n}\left( 1-y^{2i} \right)}.$$

Here I am trying the following \begin{align*} \prod_{i\geq 1}\frac{1}{1-xy^{2i-1}} &= \prod_{i\geq 1} \left( \sum_{n \geq 0} x^{n}y^{n(2i-1)} \right)\\ &= \sum_{m\geq 0} \sum_{n \geq 0} p(2m-1,n)x^{n}y^{2m-1}\\ &= \sum_{n\geq 0}\left( \sum_{m \geq 0} p(2m-1,n)y^{2m-1}\right)x^{n} \\ &= \sum_{n\geq 0} \frac{y^{n}}{\prod_{i=1}^{n}\left( 1-y^{2i} \right)}x^{n}\\ \end{align*} However, I am not sure if this is totally correct.
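
A truncated power-series check of the identity (a sympy sketch; the truncation orders are my own choices):

    # Truncated power-series check: compare
    #   prod_{i=1}^{M} 1/(1 - x y^(2i-1))   with
    #   sum_{n=0}^{M} (xy)^n / prod_{i=1}^{n} (1 - y^(2i))   up to O(y^6).
    from sympy import symbols, Mul, expand

    x, y = symbols('x y')
    M, deg = 6, 6

    lhs = Mul(*[1 / (1 - x * y**(2*i - 1)) for i in range(1, M + 1)])
    rhs = sum((x*y)**n / Mul(*[1 - y**(2*i) for i in range(1, n + 1)])
              for n in range(M + 1))

    diff = (lhs - rhs).series(y, 0, deg).removeO()
    print(expand(diff))   # 0: all coefficients up to the cutoff agree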

Finding $k$ so that $y= kx^2-\frac1{x^2}$ has an inflection point at $x=1$

Posted: 13 Apr 2021 07:25 PM PDT

Find the constant $k$ so that the graph of $y= kx^2 - \frac1{x^2}$ has an inflection point at $x=1$.
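
A sketch of the standard computation (the condition used is $y''(1) = 0$, plus a sign-change check):

    # Solve y''(1) = 0 for k, where y = k x^2 - 1/x^2.
    from sympy import symbols, diff, solve

    x, k = symbols('x k')
    y = k * x**2 - 1 / x**2
    y2 = diff(y, x, 2)               # y'' = 2k - 6/x^4
    print(solve(y2.subs(x, 1), k))   # [3]
    # With k = 3, y'' = 6 - 6/x^4 is negative for 0 < x < 1 and positive
    # for x > 1, so the concavity really does change at x = 1.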

Using Green's Theorem, calculate the integral $\oint_{C}(3x +4y)dx + (2x-3y)dy$

Posted: 13 Apr 2021 07:18 PM PDT

Using Green's Theorem, calculate the closed line integral

$$\oint_{C}(3x +4y)\text{ }dx + (2x-3y)\text{ }dy$$

where $C$, a circle of radius two with center at the origin of the $xy$-plane, is traversed in the positive sense.

My approach: From Green's Theorem,

$$\oint_{C}(3x +4y)\text{ }dx + (2x-3y)\text{ }dy = \iint_A \left(-\frac{\partial}{\partial y}(3x+4y) + \frac{\partial}{\partial x}(2x-3y)\right) dx \,dy$$

I'm just having some trouble figuring out the bounds and the boundary of this integral. I saw that the answer is $-8\pi$, but can't see how to get it.
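
For what it's worth, no explicit bounds are needed here because the integrand is constant: $$\frac{\partial}{\partial x}(2x-3y) - \frac{\partial}{\partial y}(3x+4y) = 2 - 4 = -2,$$ so the double integral is $-2$ times the area of the disk of radius $2$, namely $-2 \cdot 4\pi = -8\pi$.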

Enumerability of real numbers

Posted: 13 Apr 2021 07:26 PM PDT

I know that the set of real numbers is not enumerable. However, is the set of ordered pairs of real numbers enumerable? Right now, my train of thought is this: since the set of real numbers is not enumerable, and since the set $\mathbb R^2$ is a subset of the set of real numbers, the set $\mathbb R^2$ is not enumerable.

My question would be: does being a subset of a non-enumerable set make a set non-enumerable?

Finding the probability of iid bernoulli random variable

Posted: 13 Apr 2021 07:29 PM PDT


$$Z_n = \frac{1}{\sqrt{n}} \sum_{i=1}^n (X_i - \mu)$$

Suppose $\{X_i\}$ are iid Bernoulli$(1/3)$ and $\mu = E(X_i)$. What is the probability that $Z_n \leq 0.924$?
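
A hedged sketch of the usual approach: by the CLT, $Z_n \to N(0, \sigma^2)$ with $\sigma^2 = \frac{1}{3}\cdot\frac{2}{3} = \frac{2}{9}$, and $0.924/\sqrt{2/9} \approx 1.96$, so the limiting probability is $\Phi(1.96) \approx 0.975$. A quick simulation agrees:

    # Z_n = (sum X_i - n*mu) / sqrt(n) with X_i ~ Bernoulli(1/3);
    # its limit is N(0, 2/9), and 0.924 / sqrt(2/9) ~ 1.96.
    import numpy as np

    rng = np.random.default_rng(2)
    n, reps, mu = 10_000, 200_000, 1/3
    z = (rng.binomial(n, 1/3, size=reps) - n * mu) / np.sqrt(n)
    print((z <= 0.924).mean())   # ~0.975, i.e. Phi(1.96)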

Linear approximation and differentials

Posted: 13 Apr 2021 07:44 PM PDT

Question:
Given $e^y + y(x-2) = x^2 - 8$
Use a linear approximation to estimate $y$ when $x = 2.98$. Is the exact answer larger or smaller than your estimate? (optional) Use a sketch to justify your answer.
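
A sketch of one route (assuming the intended base point is $x = 3$, where $e^y + y = 1$ forces $y = 0$): implicit differentiation gives $y' = (2x - y)/(e^y + x - 2)$, so $y'(3) = 3$ and the estimate is $y(2.98) \approx 0 + 3(2.98 - 3) = -0.06$.

    # Linear approximation of the implicitly defined y(x) near (3, 0),
    # which satisfies e^y + y(x-2) = x^2 - 8 since e^0 + 0 = 1 = 9 - 8.
    from sympy import symbols, exp, idiff, nsolve

    x, Y = symbols('x Y')
    F = exp(Y) + Y * (x - 2) - (x**2 - 8)   # F(x, y) = 0 defines y(x)

    slope = idiff(F, Y, x).subs({x: 3, Y: 0})   # dy/dx at (3, 0): equals 3
    print("estimate:", 0 + slope * (2.98 - 3))  # -0.0600
    print("exact:   ", nsolve(F.subs(x, 2.98), Y, -0.06))  # ~ -0.0613, smaller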

Number of ways $k$ identical objects can be distributed in $n$ different boxes

Posted: 13 Apr 2021 07:35 PM PDT

In how many ways can $k$ identical objects be distributed into $n$ different boxes so that, writing $x_i$ for the number of objects in box $i$, we have

$$2 x_i \leq k+1 \quad \text{for all } 1\leq i \leq n?$$ Roughly speaking, we don't want any box to contain more than half of the objects.
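
A computational sketch (the inclusion-exclusion formula below is the standard bounded stars-and-bars count, with cap $c = \lfloor (k+1)/2 \rfloor$ per box, cross-checked by brute force):

    # Count solutions of x_1 + ... + x_n = k with 0 <= x_i <= c, c = (k+1)//2.
    from math import comb
    from itertools import product

    def count_ie(k, n):
        c = (k + 1) // 2
        total = 0
        for j in range(n + 1):               # j = number of boxes over the cap
            rem = k - j * (c + 1)
            if rem < 0:
                break
            total += (-1)**j * comb(n, j) * comb(rem + n - 1, n - 1)
        return total

    def count_brute(k, n):
        c = (k + 1) // 2
        return sum(sum(xs) == k for xs in product(range(c + 1), repeat=n))

    for k, n in [(5, 3), (6, 3), (7, 4)]:
        print(k, n, count_ie(k, n), count_brute(k, n))   # the two counts agree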

Recursion: I'm just very dumb

Posted: 13 Apr 2021 07:16 PM PDT

I overthought the question; thanks for the help.

Show that $K$ is undecidable

Posted: 13 Apr 2021 07:39 PM PDT

Let $$K = \{\langle W \rangle: W \text{ is a Turing machine (working over the alphabet } \{a, b, c\}\text{) and } L(W) \text{ contains at least one word with no } a\}$$

I know that I can suppose that $K$ is decidable (for contradiction) and show that $A_{TM} = \{\langle M, w\rangle: M \text{ is a Turing machine and } M \text{ accepts } w\}$ would then also be decidable (which is a contradiction). I just don't know how to continue. Can you help from here?

I know that the proof must produce an algorithm which decides the $A_{TM}$ problem. We must therefore show how to decide whether a given machine $M$ accepts a given $w$.

The algorithm follows this scheme: given $\langle M, w\rangle$,

  1. Build a question $X_{M,w}$ for the algorithm $D$ which decides $K$.
  2. Receive the result of $D$.
  3. Reinterpret this result to decide if $M$ accepts $w$.

My problem here is that I just don't know how to build $X_{M,w}$.
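
One standard construction, sketched here with machines represented as Python callables purely for illustration (my sketch, not a verbatim textbook reduction): let $X_{M,w}$ be the machine $W$ that ignores its own input and simulates $M$ on $w$, accepting iff $M$ accepts. Then $L(W)$ is either empty or all of $\{a,b,c\}^*$, so $L(W)$ contains an $a$-free word iff $M$ accepts $w$, and a decider $D$ for $K$ would decide $A_{TM}$.

    # Reduction sketch: machines modeled as Python callables for illustration.
    def build_X(M, w):
        """W_{M,w}: on any input, ignore it and simulate M on w.
        L(W) contains an a-free word (e.g. "b") iff M accepts w."""
        def W(word):
            return M(w)          # the input `word` is ignored on purpose
        return W

    # Toy demo with a "machine" that halts on everything:
    M = lambda s: s.count("a") % 2 == 0    # accepts words with an even number of a's
    W = build_X(M, "aa")
    print(W("b"), W("ccc"))    # True True: M accepts "aa", so L(W) is everything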

$\text{Cov}(A|X,Y) = \text{Cov}(Y|A,X) \text{Cov}(Y|X)^{-1}\text{Cov}(A|X)$ using Bayes rule

Posted: 13 Apr 2021 07:14 PM PDT

Given Bayes' rule for density functions, \begin{align} f(a|x,y) = \frac{f(y|a,x)}{f(y|x)}f(a|x) \end{align} where the joint distribution is Gaussian, I aim to conclude \begin{align} \text{Cov}(A|X,Y) = \text{Cov}(Y|A,X) \text{Cov}(Y|X)^{-1}\text{Cov}(A|X). \end{align} Is it straightforward? And how can we show it in the vector case, i.e., when each random variable is a random vector?
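
In the scalar case the identity does check out numerically; here is a sketch using Schur complements for the conditional covariances (note the product as stated only has matching dimensions when $\dim A = \dim Y$, so the vector case needs a more careful formulation):

    # Check Cov(A|X,Y) = Cov(Y|A,X) Cov(Y|X)^{-1} Cov(A|X) for scalar
    # jointly Gaussian (A, X, Y); conditionals come from Schur complements.
    import numpy as np

    rng = np.random.default_rng(3)
    B = rng.standard_normal((3, 3))
    S = B @ B.T + 3 * np.eye(3)           # random SPD covariance, order (A, X, Y)

    def cond_cov(S, keep, given):
        K, G = np.ix_(keep, keep), np.ix_(given, given)
        C = np.ix_(keep, given)
        return S[K] - S[C] @ np.linalg.solve(S[G], S[C].T)

    lhs = cond_cov(S, [0], [1, 2])                       # Cov(A | X, Y)
    rhs = (cond_cov(S, [2], [0, 1])                      # Cov(Y | A, X)
           @ np.linalg.inv(cond_cov(S, [2], [1]))        # Cov(Y | X)^{-1}
           @ cond_cov(S, [0], [1]))                      # Cov(A | X)
    print(np.allclose(lhs, rhs))                         # True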

The motivation is the error estimate before and after observing a single measurement. In part, $A$ is the source, $Y$ is the new measurement and $X$ is the old measurement.

Reductive group terminology questions

Posted: 13 Apr 2021 07:28 PM PDT

$\DeclareMathOperator{\Hom}{Hom}$ $\newcommand{\g}{\mathfrak{g}}$

I am a beginner in the subject of reductive groups and I am hoping someone might be able to walk me through some basic terminology. This is not homework so any and all remarks, even those not directly related to what I ask, are helpful. In some sense I am asking many questions, so if moderators wish that I ask separate questions, I will of course do so instead. But I view all of these as trivially tied together and I think it will be easy for someone knowledgeable to answer them all rapid-fire.

Let $k$ be an algebraically closed field of arbitrary characteristic. Let $G$ be a connected reductive group over $k$. Let $B=TU$ be a fixed Borel subgroup of $G$, where $T$ is a torus and $U$ is the unipotent radical of $B$. Let $X^*(T):=\Hom(T, GL_1)$ be the group of characters or the weight lattice and $X_*(T):=\Hom(GL_1, T)$ be the group of cocharacters or the coweight lattice. Now let $\g$ be the Lie algebra of $G$. The adjoint representation of $G$ is given by conjugation on $\g$. A root is a nontrivial weight of $G$ that occurs in the action of $T$ on $\g$. The choice of $B$ determines a set of positive roots. A positive root that is not the sum of two positive roots is called a simple root. If everything I have written so far makes sense, then I am pretty good up until here.

My questions are the following:

  1. What is the meaning of a coroot? I have been told that there is a pairing between the weight and coweight lattices, and those coweights corresponding to the roots are called coroots. What precisely is this pairing and what is the meaning of "corresponding to?" And what would the coroots for, say, $GL_n$ look like (with the standard choices of $B$ and $T$)?

  2. Similar to #1, what is a positive coroot and a simple coroot? I have some good guesses but I want to be sure.

  3. What is a dominant weight and a dominant coweight? Why would we care about such weight/coweights?

  4. Are there only finitely many dominant weights/coweights?

  5. Finally, what are the fundamental weights and fundamental coweights? It seems that these are necessarily dominant, but I am not sure what their precise definition is. Are there only finitely many of these?

Any and all hints or remarks that you feel may clarify the situation are of course welcome.
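
For question 1, one concrete anchor (standard facts, stated informally as a hedge rather than a full answer): the pairing $\langle \cdot, \cdot \rangle : X^*(T) \times X_*(T) \to \mathbb{Z}$ is defined by $\chi(\lambda(t)) = t^{\langle \chi, \lambda \rangle}$. For $GL_n$ with the diagonal torus, the root $\alpha = e_i - e_j$ has coroot $$\alpha^\vee : t \mapsto \mathrm{diag}(1, \dots, t, \dots, t^{-1}, \dots, 1)$$ ($t$ in slot $i$, $t^{-1}$ in slot $j$), and one checks $\langle \alpha, \alpha^\vee \rangle = 2$.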

Does a pair $(a,b)$ exist where both $a$ and $b$ are non-negative, one of them is a perfect square and the other is not, and their product is a perfect square?

Posted: 13 Apr 2021 07:45 PM PDT

Is it possible to have a pair $(a,b)$ where

  1. both $a$ and $b$ are non-negative,
  2. one of them is a perfect square and the other is not,
  3. their product $a \cdot b$ is a perfect square?
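
A brute-force sketch shows the edge case that makes the answer hinge on "non-negative" versus "positive":

    # Search for pairs (a, b): a, b >= 0, exactly one a perfect square,
    # product a perfect square.
    from math import isqrt

    def is_square(n):
        return isqrt(n) ** 2 == n

    hits = [(a, b) for a in range(20) for b in range(20)
            if is_square(a) != is_square(b) and is_square(a * b)]
    print(hits[:5])   # e.g. (0, 2): 0 is a perfect square, 2 is not, 0*2 = 0 is
    # Every hit involves 0; for strictly positive a and b there are none,
    # since a square times a non-square is never a square.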

If P = NP, then the following problem would be solvable in polynomial time?

Posted: 13 Apr 2021 07:46 PM PDT

The problem is the following (searching a database with no structure):

There exists a function $f$ that maps a bitstring $z$ to a binary value. There is only one marked element $x$ such that $f(x) = 1$. For all other inputs $y \neq x$ it holds that $f(y) = 0$. The goal is to find the input $x$.

The function $f$ is a blackbox / oracle: its implementation in terms of logic gates, for example, is not known and each evaluation of $f$ (call to the oracle) takes a constant amount of time.

This problem can then be cast into a decision variant. Given a candidate solution $x$, it can be checked in a constant amount of time, simply by evaluating $f(x)$.

Guessing and checking all inputs takes exponential time, and a solution is verifiable in constant time, so this problem is in NP.

Would P = NP imply that there is a classical algorithm on a Turing machine, that solves this problem in polynomial time? Can't it be quite easily proved that there exists no polynomial algorithm here? (Time complexity being polynomial in the number of input bits)

Applying the additive property to factor into an expression

Posted: 13 Apr 2021 07:21 PM PDT

I want to understand how the additive identity property is used to factor:

$x^2 + 6x +9 - 9 = (x + 3)^2 - 9$

  1. $x^2 + 6x +9 - 9 = (x^2 + 6x +9) - 9$

What is the algebraic principle governing how terms can be grouped together in parentheses?

  2. $(x^2 + 6x +9) - 9 = (x + 3)^2 - 9$

How does the first expression factor into the second expression?
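
As a side note, the grouping step is justified by associativity of addition (the $+9-9$ step is where the additive identity/inverse enters); a sympy check of the factorization itself:

    # The grouped trinomial is a perfect square, and undoing the -9 recovers
    # the original two terms.
    from sympy import symbols, factor, expand

    x = symbols('x')
    print(factor(x**2 + 6*x + 9))     # (x + 3)**2
    print(expand((x + 3)**2 - 9))     # x**2 + 6*x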

$\sum_{n=1}^{\infty}\frac{\sin(nx)}{\sqrt{n}} \in L^{1}(0,2\pi)$

Posted: 13 Apr 2021 07:42 PM PDT

I need to prove that $f(x)=\sum_{n=1}^{\infty}\frac{\sin(nx)}{\sqrt{n}} \in L^{1}(0,2\pi).$

By Dirichlet's test, $\sum_{n=1}^{\infty}\frac{\sin(nx)}{\sqrt{n}}$ converges uniformly on the intervals $[\delta, 2\pi-\delta]$, $\delta>0$. I don't know how to prove that $f \in L^1$, but I think the following fact may be useful: $$\int\frac{\sin(nx)}{\sqrt{n}}dx=-\frac{\cos(nx)}{n\sqrt{n}},$$ and $F(x)=\sum \frac{\cos(nx)}{n\sqrt{n}}$ converges uniformly and absolutely on $[0,2\pi]$ by the Weierstrass M-test.

Thank you for any help!

Estimating a bivariate moment generating function

Posted: 13 Apr 2021 07:19 PM PDT

Assume there are many independent and identically distributed sample pairs, e.g. $(X_1, Y_1), (X_2, Y_2), (X_3, Y_3), \dots, (X_n, Y_n)$.

How do you consistently estimate the following function $M(t_1, t_2)$, such that the estimator converges in probability to the function?

Function here: $M(t_1,t_2) = \mathbb E\left[e^{t_1X+t_2Y}\right]$ https://i.stack.imgur.com/JOHgT.jpg

$M(t_1, t_2)$ is a bivariate moment-generating function.
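
The natural estimator here is the empirical average, consistent by the law of large numbers; a sketch (the correlated Gaussian example is mine, chosen because its true MGF is known in closed form):

    # M_hat(t1, t2) = (1/n) * sum_i exp(t1*X_i + t2*Y_i) -> M(t1, t2) in
    # probability, by the law of large numbers (whenever the MGF is finite).
    import numpy as np

    def mgf_hat(xs, ys, t1, t2):
        return np.mean(np.exp(t1 * xs + t2 * ys))

    rng = np.random.default_rng(4)
    n, rho = 200_000, 0.5
    xs, ys = rng.multivariate_normal([0, 0], [[1, rho], [rho, 1]], size=n).T

    t1, t2 = 0.3, -0.2
    true = np.exp(0.5 * (t1**2 + 2*rho*t1*t2 + t2**2))   # Gaussian MGF
    print(mgf_hat(xs, ys, t1, t2), "vs", true)           # close agreement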

Entire question here for reference: https://i.stack.imgur.com/Fxjsv.png

Thank you!

Geometric fundamentals of a variable [closed]

Posted: 13 Apr 2021 07:18 PM PDT

[image: graph of several functions meeting at (1, 1)]

Hi, first post on here. Sorry this seems like a basic question compared to other posts.

As seen in this graph, manipulations of $x$ take different paths while approaching $x$. Are the geometric shapes fundamental to the operations? I'm really trying to get a good core understanding.

Edit for clarity:

Before all points meet at (1, 1) the lines approaching (1, 1) are different. If I use the coordinates (0, y to 0, 1), (0, x to 1) and (1, 1) as a frame of reference I can see that each function relates to a different shape within that frame of reference as x is approaching 1.

Context:

I'm looking at patterns in irrational numbers and their geometric operations. Looking at ${x^2}$ for instance; is this curve a rotation or completely linear fundamentally and curved when concatenated with an exponential function?

I'm not a mathematician so it's very hard to speak the language fluent enough to get my question across effectively.

How to confirm that $\phi(F_1(x,y))=F_2(\phi(x),\phi(y))$, where $F_1$ and $F_2$ are the formal group laws of the elliptic curves $E_1$, $E_2$

Posted: 13 Apr 2021 07:34 PM PDT

This question is from Silverman's The Arithmetic of Elliptic Curves, p. 134.

Let $K$ be a field of characteristic $p > 0$, let $E_1/ K$ and $ E_2/K$ be elliptic curves, and let $\phi : E_1 \to E_2$ be a nonzero isogeny defined over $K$. Further, let $f: \hat{E_1} \to \hat{E_2} $ be the homomorphism of formal groups induced by $\phi$.

My question:

How does an isogeny $\phi$ of elliptic curves induce a homomorphism of the corresponding formal groups?

I guess $f(T)=\phi(T)$, but I cannot check that this is actually a homomorphism.

That is, I would like to confirm

$\phi(F_1(x,y))=F_2(\phi(x),\phi(y))$, where $F_1$ and $F_2$ are the formal group laws of the elliptic curves $E_1$, $E_2$.

Gauss-Bonnet theorem for vector bundles on manifolds with boundary

Posted: 13 Apr 2021 07:24 PM PDT

I hope this question is not a duplicate, but I really failed to find a good reference for it.

I am wondering whether there is a generalization of the Gauss-Bonnet theorem to real vector bundles on a manifold with a boundary. The special case of tangent bundles is well studied -- it is well known that integration of the Euler class of the tangent bundle reduces to the good old Gauss-Bonnet theorem on the underlying manifold. But is it possible to generalize Gauss-Bonnet to an arbitrary real vector bundle on a manifold with a boundary, i.e. not necessarily the tangent one?

I would be grateful for a suitable reference. As a start, I would be completely satisfied with the case of a 2-dimensional bundle on a 2-dimensional manifold with a boundary (i.e. no need to complicate this most elementary story with the higher-dimensional Gauss-Bonnet-Chern generalization).

Fixed point of Riemann Zeta function

Posted: 13 Apr 2021 07:28 PM PDT

I have been looking for fixed points of the Riemann zeta function and found something very interesting: it has two fixed points in $\mathbb{C}\setminus\{1\}$.

The first fixed point is in the right half-plane $\{z\in\mathbb{C}:Re(z)>1\}$, and it lies precisely on the real axis (its value is approximately $1.83377$).

Question: I want to show that Zeta function has no other fixed points in the right half complex plane excluding the real axis, $D=\{z\in\mathbb{C}:Im(z)\ne 0,Re(z)>1\}$.

Tried: On $D$ the zeta function is given by $\displaystyle\zeta(s)=\sum_{n=1}^\infty\frac{1}{n^s}$. Suppose, if possible, that it has a fixed point $z=a+ib\in D$. Then, $$\zeta(z)=z\\ \implies\sum_{n=1}^\infty\frac{1}{n^z}=z\\ \implies \sum_{n=1}^\infty e^{-z\log n}=z\\ \implies \sum_{n=1}^\infty e^{-(a+ib)\log n}=a+ib$$ Equating real and imaginary parts we get $$\sum_{n=1}^\infty e^{-a\log n}\cos(b\log n)=a \tag{1}$$ $$\sum_{n=1}^\infty e^{-a\log n}\sin(b\log n)=-b \tag{2}$$ where $b\ne 0$ and $a>1$.

Problem: How am I supposed to show that relation (2) cannot hold?
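
Not a proof, but a quick numerical confirmation of the real fixed point quoted above (an mpmath sketch):

    # Locate the real fixed point of zeta on (1, infinity).
    from mpmath import mp, zeta, findroot

    mp.dps = 20
    print(findroot(lambda s: zeta(s) - s, 1.8))   # ~1.83377...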

Any hint/answer/link/research paper/note will be highly appreciated. Thanks in advance.

