Friday, April 9, 2021

Recent Questions - Mathematics Stack Exchange

Given a Jordan canonical form, answer each question

Posted: 09 Apr 2021 08:30 PM PDT

Let $T$ be a linear operator on a finite-dimensional vector space $V$ whose Jordan canonical form is

$$\begin{pmatrix} 2 & 1 & 0 & 0 & 0 &0 & 0 &0 &0 & 0 \\ 0 & 2 & 1 & 0 & 0 &0 & 0 &0 &0 & 0 \\ 0 & 0 & 2 & 1 & 0 &0 & 0 &0 &0 & 0 \\ 0 & 0 & 0 & 2 & 0 &0 & 0 &0 &0 & 0 \\ 0 & 0 & 0 & 0 & 2 &0 & 0 &0 &0 & 0 \\ 0 & 0 & 0 & 0 & 0 &3 & 1 &0 &0 & 0 \\ 0 & 0 & 0 & 0 & 0 &0 & 3 &0 &0 & 0 \\ 0 & 0 & 0 & 0 & 0 &0 & 0 &3 &0 & 0 \\ 0 & 0 & 0 & 0 & 0 &0 & 0 &0 &0 & 1 \\ 0 & 0 & 0 & 0 & 0 &0 & 0 &0 &0 & 0 \\ \end{pmatrix}$$

i) Find the characteristic polynomial.

ii) For which eigenvalues is the eigenspace equal to the generalized eigenspace?

iii) For each eigenvalue $\lambda_{i}$, find the minimal positive integer $p_{i}$ such that the generalized eigenspace $K_{\lambda_{i}}$ associated to $\lambda_{i}$ coincides with the kernel of $(T-\lambda_{i}I)^{p_{i}}$, i.e. $K_{\lambda_{i}}= \ker\left((T-\lambda_{i}I)^{p_{i}}\right)$.

iv) Let $U_{i}$ be the restriction of $T-\lambda_{i}I$ to $K_{\lambda_{i}}$ for each $i$. Find, for each $i$, the dimensions of $\ker(U_{i})$, $\ker(U_{i}^{2})$, and $\ker(U_{i}^{3})$.

My solution

i) $p(x)=x^{2}(x-2)^{5}(x-3)^{3}$

ii) I think it holds for none of the three, because each of them has a Jordan block of size greater than $1$.

iii) The Jordan basis for $\lambda_{1}=0$ only has two vectors, so $p_{1}=0$. For $\lambda_{2}=2$, the Jordan block has size $4$, so the Jordan basis is $\{v,(T-2I)v,(T-2I)^{2}v,(T-2I)^{3}v\}$, so $p_{2}=3$. Lastly, $\lambda_{3}=3$ has $p_{3}=1$.

iv) I didn't understand how to do this part; can you help me?

Also, can you recommend any reading on this topic?
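(Added for illustration, not part of the original post.) Parts (ii)-(iv) can be sanity-checked numerically, since $\dim \ker M = 10 - \operatorname{rank} M$. A minimal numpy sketch, with the matrix transcribed from the question:

```python
import numpy as np

# Build the 10x10 Jordan matrix from the question:
# a 4-block and a 1-block for eigenvalue 2,
# a 2-block and a 1-block for eigenvalue 3,
# and a 2-block for eigenvalue 0.
J = np.zeros((10, 10))
for i, lam in enumerate([2, 2, 2, 2, 2, 3, 3, 3, 0, 0]):
    J[i, i] = lam
for i, j in [(0, 1), (1, 2), (2, 3), (5, 6), (8, 9)]:
    J[i, j] = 1  # superdiagonal 1s inside each Jordan block

for lam in [2, 3, 0]:
    M = J - lam * np.eye(10)
    dims = [10 - np.linalg.matrix_rank(np.linalg.matrix_power(M, k))
            for k in range(1, 5)]
    print(f"lambda={lam}: dim ker (J - {lam}I)^k for k=1..4 ->", dims)
```

The index $p_i$ is where the printed sequence stabilizes, and the first entry is $\dim\ker(U_i)$, the second $\dim\ker(U_i^2)$, and so on.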

What is this topological property? A form of local connectedness?

Posted: 09 Apr 2021 08:26 PM PDT

Let $S$ be a subset of some topological space $X$.

What is known about the following property of $S$? Does it have a name? Is it standard?

Property: For all $x\in \bar{S}$ (the closure of $S$), and for all open sets $U$ with $x\in U$, there exists an open set $V$ with $x\in V\subseteq U$ such that $V\cap S$ is connected.

If $S$ is closed, then this just means that $S$ is locally connected. But for non-closed $S$, the two concepts do not agree. (Both $\mathbb{R}\setminus \{0\}$ and its closure are locally connected, but $\mathbb{R}\setminus \{0\}$ does not possess this property.)

True/False: a polynomial reducible over a UFD is also reducible over its quotient field?

Posted: 09 Apr 2021 08:22 PM PDT

The statement is as follows. I know that the answer is False, but I am not able to find a solid counterexample that shows it clearly.

Let R be a UFD and K be the quotient field of R. If f(x) ∈ R[x] is reducible in R[x], then f(x) is reducible in K[x].

Proving finitely many.

Posted: 09 Apr 2021 08:29 PM PDT

I want to prove that: If $N$ is a finitely generated semi-simple $R$-module, then $N$ is a sum of finitely many simple submodules.

I know that if $N$ is a finitely generated $R$-module, then the following three conditions are equivalent (I have seen the proof before).

  1. $N$ is a sum of simple modules.

  2. $N$ is a direct sum of simple modules.

  3. Given any submodule $M \subset N,$ there exists a unique submodule $M' \subset N$ such that $N = M \oplus M'.$

And any module satisfying these conditions is called a semi-simple module.

Then, to prove my first statement above, I know that I can adapt my proof of either $2 \implies 1$ or $3 \implies 1$. I prefer to prove the following: if $N$ is a direct sum of simple modules, then $N$ is a sum of finitely many simple modules.

Here is my trial:

Assume that $N$ is a direct sum of simple modules; then, by the definition of direct sum, $N$ is a sum of simple modules. But then how can I prove that it is a finite sum? Could anyone please help me prove this?

What is linear dependency?

Posted: 09 Apr 2021 08:26 PM PDT

Say I have 3 vectors a, b and c.

I know that a is NOT linearly independent of either b or c. Is there a chance that b and c are linearly independent of each other?
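(Added for illustration, not part of the original question.) A small numpy sketch of the key case: if a is the zero vector, then {a, b} and {a, c} are each dependent while {b, c} can still be independent.

```python
import numpy as np

def dependent(*vectors):
    """A set of vectors is linearly dependent iff the rank of the
    matrix they form is smaller than the number of vectors."""
    M = np.array(vectors)
    return np.linalg.matrix_rank(M) < len(vectors)

a, b, c = np.zeros(2), np.array([1.0, 0.0]), np.array([0.0, 1.0])
print(dependent(a, b), dependent(a, c))  # True True: a is dependent with each
print(dependent(b, c))                   # False: b and c are independent
```

If $a \neq 0$, pairwise dependence with $a$ forces $b$ and $c$ to be scalar multiples of $a$, hence dependent on each other; the zero vector is the loophole.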

reflexivity of Bochner space

Posted: 09 Apr 2021 08:22 PM PDT

Let $(\Omega, \mathscr F, \mu)$ be a $\sigma$-finite measure space and $X$ be a Banach space, and assume that $X$ has the Radon-Nikodym property with respect to $(\Omega, \mathscr F, \mu)$. I'm trying to prove the reflexivity of the Bochner space $L^p(\Omega;X)$ when $1< p <\infty$. The following is my proof:

Let $1/p + 1/q =1 $ and consider the map $\Phi_{p,X} : L^q(\Omega; X^*) \to L^p(\Omega; X)^* $ defined by $$\langle f, \Phi_{p,X}g \rangle := \int_\Omega \langle f,g\rangle \,d\mu, \quad f\in L^p(\Omega; X),\;g\in L^q(\Omega; X^*) .$$ Then by the Radon-Nikodym property, $\Phi_{p,X}$ is an isometric isomorphism. Note that, for all $f\in L^p(\Omega; X)$ and $g\in L^q(\Omega; X^*) $ we have $$ \langle f, \Phi_{p,X}g \rangle = \int_\Omega \langle f,g\rangle \,d\mu = \int_\Omega \langle g,j_Xf\rangle \,d\mu = \langle g, \Phi_{q,X^*}(j_Xf) \rangle, $$ where $j_X:X\to X^{**}$ is the canonical injection. Now, let $J:L^p(\Omega; X)\to L^p(\Omega; X)^{**} $ be the canonical injection. Then for all $\Lambda \in L^p(\Omega; X)^* $ and $f\in L^p(\Omega; X) $, we have $$ \langle \Lambda, Jf \rangle = \langle f,\Lambda \rangle = \langle f,\Phi_{p,X}\Phi_{p,X}^{-1}\Lambda \rangle = \langle \Phi_{p,X}^{-1}\Lambda,\Phi_{q,X^*}(j_Xf) \rangle = \langle \Lambda,(\Phi_{p,X}^{-1})^*\Phi_{q,X^*}(j_Xf) \rangle, $$ where $(\Phi_{p,X}^{-1})^*$ is the adjoint operator of $\Phi_{p,X}^{-1}$. Therefore $J$ is surjective. That is, $L^p(\Omega; X)$ is reflexive.

Is the above proof correct?

Cramer-Rao Lower Bound for Adjusted Poisson MLE

Posted: 09 Apr 2021 08:13 PM PDT

For $X_i$ iid Poisson($\theta$), let $y(\theta)=\theta e^{-\theta}$ be the probability that $X=1$. Since the MLE of $\theta$ for a Poisson is $\bar{x} = \frac{1}{n}\sum_{i=1}^{n} X_i$, I have used the invariance property to determine that the MLE of $y(\theta)$ is $\hat{\theta}e^{-\hat{\theta}} = \left(\frac{1}{n}\sum_{i=1}^{n} x_i\right)e^{-\frac{1}{n}\sum x_i} = W(X)$.

Next, I am trying to find the Cramer-Rao lower bound for the variance of this MLE. As part of this calculation, I need to compute $\frac{d}{d\theta}E_\theta(W(X))$.

I am stuck on the $E_\theta(W(X))$ part. This would be $E_\theta\left[\left(\frac{1}{n}\sum_{i=1}^{n} x_i\right)e^{-\frac{1}{n}\sum x_i}\right]$. I know that for Poisson($\theta$), $E\left[\sum_{i=1}^{n}X_i\right]=n\theta$, but I am not sure where this comes into play here. Any thoughts?
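(Added for reference, not from the post.) One standard route uses the probability generating function of $S=\sum X_i \sim \text{Poisson}(n\theta)$: since $E[t^S] = e^{n\theta(t-1)}$, differentiating in $t$ gives $E[S t^S] = n\theta t\, e^{n\theta(t-1)}$, and $W = (S/n)e^{-S/n}$ corresponds to $t = e^{-1/n}$. A quick numeric cross-check (scipy assumed; $\theta$ and $n$ are arbitrary test values):

```python
import numpy as np
from scipy.stats import poisson

theta, n = 1.3, 10
t = np.exp(-1.0 / n)

# Closed form from the pgf: E[W] = (1/n) * E[S t^S] with t = e^{-1/n}
closed = theta * t * np.exp(n * theta * (t - 1.0))

# Brute-force expectation over S ~ Poisson(n*theta), truncated deep in the tail
s = np.arange(0, 500)
brute = np.sum((s / n) * np.exp(-s / n) * poisson.pmf(s, n * theta))

print(closed, brute)  # the two values should agree to machine precision
```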

Optimisation problem with respect to approximation problem.

Posted: 09 Apr 2021 08:09 PM PDT

My question is: can we express or solve an optimisation problem with respect to an approximation problem? Kindly explain your answer. Thank you.

Calculating the work done to pump water

Posted: 09 Apr 2021 08:16 PM PDT

I would like to know whether my reasoning is ok. The problem is the following:

A horizontal cylindrical tank of radius $r = 4\ m$ and length $L = 10\ m$ is half full of water. What is the work done in pumping the water to the top of the tank?

Consider: $g = 10\ m\cdot s^{-2}$.

I calculate the center of mass when the water is in the lower half, and the center of mass when it is in the upper half, and finally I calculate the work with $W = mgh$, where I take $h$ as the distance between both centers of mass.

Is this approach correct?
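(Added for comparison, not from the post.) A direct slice integral treats each horizontal layer at height $y$ (measured from the tank's axis) as being lifted a distance $r - y$ to the top. A numerical sketch, assuming $\rho = 1000\ kg/m^3$ and scipy:

```python
import numpy as np
from scipy.integrate import quad

rho, g, r, L = 1000.0, 10.0, 4.0, 10.0

# A layer at height y (y in [-r, 0] for the lower half) has width
# 2*sqrt(r^2 - y^2), volume L*2*sqrt(r^2 - y^2)*dy, and is lifted (r - y).
W_slices, _ = quad(lambda y: rho * g * L * 2 * np.sqrt(r**2 - y**2) * (r - y),
                   -r, 0)

# Center-of-mass shortcut for pumping to the top: W = m * g * (r - y_cm),
# where y_cm = -4r/(3*pi) is the centroid of the lower half-disc.
m = rho * L * np.pi * r**2 / 2
W_com = m * g * (r + 4 * r / (3 * np.pi))

print(W_slices, W_com)  # the two should agree
```

Note that this lifts every parcel to the top of the tank, which is not the same as moving the lower half-disc's center of mass to the upper half-disc's center of mass.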

Matrix reconstruction puzzle

Posted: 09 Apr 2021 08:03 PM PDT

Say a reconstruction of matrix $A$ is $A'$ and it's defined as

$ A' = P^TDPA $

where $P$ is an orthonormal matrix, $D$ is a diagonal binary (1 or 0) matrix. In a trivial case, when all diagonal elements are 1, we have a perfect reconstruction ($A'=A$).

Now we constrain the number of 1's in the diagonal entries of $D$ to, say, $n$. How do I find the best $D$ s.t. $Tr(D)=n$ that would minimize $||A-A'||$?

I think I need to inspect the singular vectors of $A$, but I am not sure what to do exactly.
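(Added as a possible starting point, not from the post.) Since $P$ is orthogonal, $\|A - P^TDPA\|_F = \|(I-D)PA\|_F$, so for a fixed $P$ the optimal $D$ keeps the $n$ rows of $PA$ with the largest norms; when $P$ may be chosen as well, this reduces to truncated SVD. A numpy sketch:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((6, 6))
P, _ = np.linalg.qr(rng.standard_normal((6, 6)))  # a random orthogonal P
n = 3

# ||A - P^T D P A||_F = ||(I - D) P A||_F, so drop the smallest rows of PA.
row_norms = np.linalg.norm(P @ A, axis=1)
keep = np.argsort(row_norms)[-n:]
D = np.zeros((6, 6))
D[keep, keep] = 1.0

A_rec = P.T @ D @ P @ A
print(np.linalg.norm(A - A_rec))

# If P may be chosen too, the best choice comes from the SVD of A:
U, s, Vt = np.linalg.svd(A)
A_svd = U[:, :n] @ np.diag(s[:n]) @ Vt[:n, :]
print(np.linalg.norm(A - A_svd))  # Eckart-Young: no rank-n projector does better
```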

Trilinear inequality

Posted: 09 Apr 2021 08:03 PM PDT

Assume that a sample is used to estimate a population mean μ. Find the 90% confidence interval for a sample of size 33 with a mean of 40.3 and a standard deviation of 20.5. Enter your answer as a tri-linear inequality accurate to one decimal place (because the sample statistics are reported accurate to one decimal place).
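(Added for illustration.) A minimal scipy sketch of the usual $t$-interval $\bar{x} \pm t_{0.95,\,n-1}\, s/\sqrt{n}$, assuming that is the intended formula here:

```python
import numpy as np
from scipy.stats import t

n, xbar, s = 33, 40.3, 20.5
tcrit = t.ppf(0.95, df=n - 1)   # two-sided 90% -> 5% in each tail
half = tcrit * s / np.sqrt(n)
print(f"{xbar - half:.1f} < mu < {xbar + half:.1f}")
```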

An equivalence relation gives rise to a partition

Posted: 09 Apr 2021 08:01 PM PDT

I am trying to solve exercise 1.2 in Aluffi's text "Chapter 0."

Prove that if $\sim$ is a relation on a set $S$, then the corresponding family $\mathscr{P}_{\sim}$ defined in $\S 1.5$ is indeed a partition of $S$: that is, its elements are nonempty, disjoint, and their union is $S$.

Here is my attempt.

By definition, we have $\mathscr{P}_{\sim} = \{[a]_{\sim} \mid a \in S\}$. Each $[a]_{\sim} \in \mathscr{P}_{\sim}$ is nonempty since $a \in [a]_{\sim}$. Next, distinct classes are disjoint. Indeed, suppose $[a]_{\sim} \neq [b]_{\sim}$ are two distinct elements of $\mathscr{P}_{\sim}$ and $x \in [a]_{\sim} \cap [b]_{\sim}$. Then $x \sim a$ and $x \sim b$, so $a \sim x$ by symmetry, and hence $a \sim b$ by transitivity. So if $y \sim a$, then $y \sim b$, and if $y \sim b$, then $y \sim a$; thus $[a]_{\sim} = [b]_{\sim}$, a contradiction. Finally, I claim that $\bigcup\limits_{[a]_{\sim} \in \mathscr{P}_{\sim}} [a]_{\sim} = S$. Clearly $\bigcup\limits_{[a]_{\sim} \in \mathscr{P}_{\sim}} [a]_{\sim} \subset S$, since $[a]_{\sim} \subset S$ for all $[a]_{\sim} \in \mathscr{P}_{\sim}$. Conversely, if $s \in S$, then $s \in [s]_{\sim} \in \mathscr{P}_{\sim}$, so $s \in \bigcup\limits_{[a]_{\sim} \in \mathscr{P}_{\sim}} [a]_{\sim}$, giving the opposite inclusion. Therefore, the quotient of $S$ by $\sim$ gives a partition of $S$.

How does this look? In particular, have I used the terminology of quotienting by $\sim$ correctly?

Convergence In Distribution vs Probability Explanation

Posted: 09 Apr 2021 08:30 PM PDT

Having trouble following lecture notes:

A sequence of random variables $X_1, \dots, X_n, \dots$ with corresponding c.d.f.'s $F_1,\dots,F_n,\dots$ converges in distribution to a random variable $X$ with c.d.f. $F$ if $$ \lim _{n \rightarrow \infty} F_{n}(x)=F(x) $$ for all $x$ where $F$ is continuous.

To see that convergence in distribution in general does not imply convergence in probability, let $X_1, X_2,...$ be a sequence of i.i.d. random variables all with cdf $F=\Phi$. Also let $X$ be a random variable that is independent of these and has the same standard normal distribution.
$$ X_{n} \stackrel{d}{\longrightarrow} X $$ but for all $ \epsilon \gt 0$, $$ \operatorname{Pr}\left(\left|X_{n}-X\right|<\varepsilon\right)=2 \Phi\left(\frac{\varepsilon}{\sqrt{2}}\right)-1 $$ and this does not converge to 1.

I'm very rusty in probability theory and rather confused by this. Where does the expression $2 \Phi\left(\frac{\varepsilon}{\sqrt{2}}\right)-1$ come from? And why is he testing for convergence to $1$? In the previous notes, convergence in probability was stated with a limit of $0$: $ \lim _{n \rightarrow \infty} \operatorname{Pr}\left(\left|X_{n}-X\right|>\varepsilon\right)=0 $
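(Added for intuition, not from the notes.) If $X_n$ and $X$ are independent standard normals, then $X_n - X \sim N(0, 2)$, so $\Pr(|X_n - X| < \varepsilon) = \Pr(|Z| < \varepsilon/\sqrt{2}) = 2\Phi(\varepsilon/\sqrt{2}) - 1$, a constant strictly below $1$. A Monte Carlo sketch:

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(1)
eps = 0.5
xn = rng.standard_normal(1_000_000)  # plays the role of X_n for large n
x = rng.standard_normal(1_000_000)   # X, independent of X_n

empirical = np.mean(np.abs(xn - x) < eps)
formula = 2 * norm.cdf(eps / np.sqrt(2)) - 1
print(empirical, formula)  # both well below 1: no convergence in probability
```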

$\{x^2 + y^2<1\}$ is an open set in $\mathbb{R}^3$?

Posted: 09 Apr 2021 08:13 PM PDT

I am asked to show that $x^2 + y^2<1$ is an open set in $\mathbb{R}^3$.

I'm not sure about this. I guess what was meant is that $\{(x, y) \in \mathbb{R}^2 \mid x^2 + y^2 < 1\}$ is an open subset of $\mathbb{R}^2$.

Eigenvalues of tensor products

Posted: 09 Apr 2021 08:03 PM PDT

Say I have two matrices $A$ and $B$, and I know both are diagonalisable, both are $n\times n$ matrices and can be written as say \begin{equation} A=\sum_i a_i|a_i\rangle\langle a_i| \end{equation} and \begin{equation} B=\sum_i b_i|b_i\rangle\langle b_i| \end{equation} Now imagine I have the following matrix: \begin{equation} A\otimes B+B\otimes A \end{equation} Is there anything which can be said in general about the eigenvalues of this $n^2\times n^2$ matrix? I would imagine there is some way of writing the eigenvalues of the joint matrix in terms of the known eigenvalues $a_i$ and $b_i$.
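(Added as a numerical experiment, not from the post.) When $A$ and $B$ commute, they are simultaneously diagonalizable and the eigenvalues are exactly $a_i b_j + a_j b_i$ over all pairs $(i,j)$; for generic non-commuting $A, B$, the spectrum is not a simple function of the $a_i$ and $b_j$. A numpy sketch:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 3

def sym(M):  # random symmetric (hence diagonalizable) matrix
    return (M + M.T) / 2

A, B = sym(rng.standard_normal((n, n))), sym(rng.standard_normal((n, n)))
M = np.kron(A, B) + np.kron(B, A)

a = np.linalg.eigvalsh(A)
b = np.linalg.eigvalsh(B)
guess = np.sort([a[i] * b[j] + a[j] * b[i] for i in range(n) for j in range(n)])

print(np.sort(np.linalg.eigvalsh(M)))
print(guess)  # generally different, because A and B do not commute

# Commuting case: take B to be a polynomial in A; the two lists now coincide.
B2 = A @ A
M2 = np.kron(A, B2) + np.kron(B2, A)
b2 = a**2  # eigenvalues of A^2, paired with the same eigenvectors as A
guess2 = np.sort([a[i] * b2[j] + a[j] * b2[i]
                  for i in range(n) for j in range(n)])
print(np.allclose(np.sort(np.linalg.eigvalsh(M2)), guess2))  # True
```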

Logic doubt about the limit definition

Posted: 09 Apr 2021 08:30 PM PDT

I was thinking about the limit definition and I'm having a hard time understanding why it isn't formulated in terms of logical conjunction instead of implication. That is: let $f:A\subseteq \mathbb{R}\to\mathbb{R}$ and let $x_0$ be a limit point of $A$; why isn't the definition of limit formulated as $$\forall \epsilon>0\ \exists \delta_\epsilon >0 \ \text{s.t.} \ \forall x\in A, \ 0<|x-x_0|<\delta_\epsilon \land |f(x)-l|<\epsilon$$ instead of the well-known $$\forall \epsilon>0\ \exists \delta_\epsilon >0 \ \text{s.t.} \ \forall x\in A,\ 0<|x-x_0|<\delta_\epsilon \implies|f(x)-l|<\epsilon\,?$$ I know that intuitively the implication describes a cause-effect situation, so when I imagine how I would define a limit, I surely want "the fact that I'm going near a limit point causes the function to be near its limit value", and so I see why there is an implication. But I can't understand why the logical conjunction can't describe the same situation: when I use $\land$ in the proposition "$0<|x-x_0|<\delta_\epsilon \land |f(x)-l|<\epsilon$", I want, by the definition of logical conjunction, both conditions $0<|x-x_0|<\delta_\epsilon$ and $|f(x)-l|<\epsilon$ to be satisfied, so I still see a situation similar to the cause-effect one given by the implication, and defining the limit this way still seems plausible to me. Can someone help me understand why this doesn't work? Thank you.

Linear transformations using custom coordinates and change of basis

Posted: 09 Apr 2021 08:04 PM PDT

I have the following question here:

In $\mathbb{R}^3$, let $L$ be the line spanned and oriented by $\mathbf{v}=(−3,2,4)$, and let $R$ be the rotation of $\mathbb{R}^3$ through the angle $\frac{\pi}{2}$ about the $\mathbf{v}$ oriented line $L$ according to the Right Hand Rule.

Further, let $P:\mathbb{R}^3\rightarrow\mathbb{R}^3$ be the orthogonal projection onto the orthogonal complement of the vector $n=(-2\sqrt{29}-32,2-7\sqrt{29},2\sqrt{29}-25)$.

Find the vector $\mathbf{z}$ which results from first rotating $\mathbf{x}=(2,7,−2)$ via $R$, and then projecting the rotated vector via $P$.

I know that I have to use custom coordinates here, but I have no idea how to set up the matrices. Can someone please help? I really want to understand the process behind how these transformations are set up. I know how to use a change of basis, but not when the transformations are this unusual.
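(Added as a cross-check, not from the question.) Whatever matrices you build via change of basis, you can verify them numerically: Rodrigues' rotation formula gives the rotation about a unit axis $k$ directly, and the projection onto the orthogonal complement of $n$ is $I - nn^T/(n^Tn)$. A numpy sketch:

```python
import numpy as np

v = np.array([-3.0, 2.0, 4.0])
k = v / np.linalg.norm(v)                    # unit rotation axis
K = np.array([[0, -k[2], k[1]],
              [k[2], 0, -k[0]],
              [-k[1], k[0], 0]])             # cross-product matrix [k]_x

theta = np.pi / 2
# Rodrigues: R = I + sin(theta) K + (1 - cos(theta)) K^2  (right-hand rule)
R = np.eye(3) + np.sin(theta) * K + (1 - np.cos(theta)) * (K @ K)

n = np.array([-2 * np.sqrt(29) - 32, 2 - 7 * np.sqrt(29), 2 * np.sqrt(29) - 25])
P = np.eye(3) - np.outer(n, n) / (n @ n)     # projection onto n's complement

x = np.array([2.0, 7.0, -2.0])
z = P @ (R @ x)                              # rotate first, then project
print(z)
```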

Need help understanding diagram of punctured torus

Posted: 09 Apr 2021 07:54 PM PDT

[image: two diagrams of the punctured torus: a square with edge identifications $a$, $b$ and a triangle $r$ removed (left), and a redrawn version with $r$ on the boundary (right)]

The above diagram is a torus with an open triangle removed. I can understand the left side perfectly, where we identify the $a$ above with $a$ below, and identify $b$ above with $b$ below, then remove the triangle demarcated with $r$. But I don't understand the diagram on the right: why is $r$ sticking out to the boundary and what's the "hole" in the middle?

Problem: $\sigma$-algebras and $\sigma \mathcal{C}$

Posted: 09 Apr 2021 07:59 PM PDT

I have started to read about $\sigma$-algebras, p-systems, d-systems, and measurable spaces but am having some difficulty piecing things together.

The first exercise has already given me some trouble conceptually, hence this post. How should I think about the following?

Problem

Let $E$ be a set and let $\mathcal{C} = \{ A,B,C \}$ be a partition of $E$. List the elements of $\sigma \mathcal{C}$.

Ideas

$\sigma \mathcal{C}$ is the smallest $\sigma$-algebra that contains $\mathcal{C}$, so might this just be $\mathcal{E} = \{\mathcal{C}, \emptyset\}$? We have that $\mathcal{C} \cup \emptyset = \mathcal{C} \subseteq \mathcal{E}$ (closed under countable unions). We have $\mathcal{C} \in \mathcal{E} \implies E\setminus \mathcal{C} = \text{ ??? } \subseteq \mathcal{E}$.
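(Added for a concrete handle, not from the post.) Since $\{A,B,C\}$ partitions $E$, every element of $\sigma \mathcal{C}$ is a union of some subset of the blocks, giving $2^3 = 8$ sets. A small Python sketch with one concrete choice of partition:

```python
from itertools import combinations

# Model E = {1,...,6} with partition blocks A, B, C (any concrete choice works).
A, B, C = frozenset({1, 2}), frozenset({3, 4}), frozenset({5, 6})
blocks = [A, B, C]

sigma = []
for r in range(len(blocks) + 1):
    for combo in combinations(blocks, r):
        union = frozenset().union(*combo) if combo else frozenset()
        sigma.append(set(union))

for s in sigma:
    print(sorted(s))
print(len(sigma))  # 8 = 2^3 elements, from the empty set up to E itself
```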

What is the best strategy for the solitaire game Golf?

Posted: 09 Apr 2021 08:12 PM PDT

In Golf, 35 cards are dealt into seven face-up tableaux of five cards each, all of which are visible at once. Then one card is dealt face-up to form the discard pile, and the remaining 16 cards are put in a face-down stack ("the stack").

On each turn, the player places a card face-up on top of the discard pile subject to the following rules, listed in decreasing order of precedence. Call the topmost discard the "active card."

  1. The top card of the stack may be discarded to form a new active card, regardless of the rank of the current active card.
  2. No card may be discarded onto a King.
  3. Only a 2 may be discarded onto an Ace.
  4. A discard must be adjacent in rank to the active card. For example, if the active card is a 7, then only a 6 or an 8 may be discarded.

The object of the game is to discard all 52 cards.

What is the best strategy for this game, and what per cent of games are winnable with perfect strategy?
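(Added for illustration, not from the question.) The exact answer presumably requires searching the full game tree, but a Monte Carlo baseline under a naive greedy policy is easy to sketch in Python; it gives only a crude lower bound on the winnable fraction, not perfect play. Ranks are encoded Ace=1 through King=13, and the win condition uses the fact that the stack can always be flipped, so a game is won exactly when all seven tableaux are emptied:

```python
import random

def playable(top, active):
    if active == 13:               # rule 2: nothing on a King
        return False
    if active == 1:                # rule 3: only a 2 on an Ace
        return top == 2
    return abs(top - active) == 1  # rule 4: adjacent rank

def play_one_game(rng):
    deck = [r for r in range(1, 14) for _ in range(4)]
    rng.shuffle(deck)
    piles = [deck[5 * i:5 * i + 5] for i in range(7)]
    active = deck[35]
    stack = deck[36:]
    while True:
        moves = [p for p in piles if p and playable(p[-1], active)]
        if moves:
            active = rng.choice(moves).pop()  # naive policy: random legal move
        elif stack:
            active = stack.pop()              # rule 1: flip from the stack
        else:
            break
    return all(not p for p in piles)          # win iff every tableau emptied

rng = random.Random(0)
games = 20_000
wins = sum(play_one_game(rng) for _ in range(games))
print(f"greedy/random policy win rate: {wins / games:.3%}")
```

A real strategy also has to decide when to flip the stack even though tableau moves exist, which this policy never does.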

What really is $dr$ in differential forms?

Posted: 09 Apr 2021 08:23 PM PDT

I am reading Arnold's Mathematical Methods of Classical Mechanics, and have arrived at differential forms. At this point in the chapter, there has been no discussion of the exterior derivative. There is discussion of the differential, however. One of the problems defines a differential form $$\omega=r\;dr\wedge d\varphi$$ where $x_1=r\cos\varphi$ and $x_2=r\sin\varphi$; $x_1$ and $x_2$ are the standard coordinates in $\mathbb{R}^2$. So I can take differentials \begin{align*} dx_1\wedge dx_2 &=d(r\cos\varphi)\wedge d(r\sin\varphi)\\ &=(\cos\varphi\;dr-r\sin\varphi\;d\varphi)\wedge(\sin\varphi\;dr+r\cos\varphi\;d\varphi)\\ &=r\;dr\wedge d\varphi. \end{align*} But looking back, I realize that I don't really understand what I did at all.

First of all, when we say $x_1=r\cos\varphi$, for example, it seems like the $x_1$ we are talking about is the one that gives the coordinates on $\mathbb{R}^2$, which is our manifold. On the other hand, $dx_1\wedge dx_2$ is a $2$-form on the tangent space at a particular point of $\mathbb{R}^2$. So this substitution really doesn't even make sense to me at all; it feels like a type error. It seems perfectly plausible to me that our manifold can have polar coordinates while each tangent space has Cartesian coordinates.

Second, what even is $dr$? The only interpretation I have is that it is the differential of the function $r$. Intuitively it seems like it should be the function that, if the tangent space is parameterized using polar coordinates, retrieves the coordinate $r$. So if I were to evaluate the differential form $dr$ at the point $(0,1)$ (in $\mathbb{R}^2$) on the vector $(1,1)$ in the tangent space, would the result be $\sqrt{2}$?

I really seem to have confused a bunch of stuff here, so any help would be appreciated.
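(Added as a symbolic cross-check, not from the book.) The coefficient relating $dx_1\wedge dx_2$ to $dr\wedge d\varphi$ is the Jacobian determinant, and $dr$, as the differential of $r=\sqrt{x_1^2+x_2^2}$, acts on a tangent vector through the gradient. A sympy sketch:

```python
import sympy as sp

r, phi, x1, x2 = sp.symbols('r phi x1 x2', positive=True)

# The wedge computation: dx1 ^ dx2 = det(Jacobian) dr ^ dphi
X = sp.Matrix([r * sp.cos(phi), r * sp.sin(phi)])
print(sp.simplify(X.jacobian([r, phi]).det()))  # prints r

# dr as the differential of r(x1, x2) = sqrt(x1^2 + x2^2)
R = sp.sqrt(x1**2 + x2**2)
grad = [sp.diff(R, x1), sp.diff(R, x2)]
# Evaluate dr at the point (0, 1) on the tangent vector (1, 1):
val = sum(g.subs({x1: 0, x2: 1}) * v for g, v in zip(grad, (1, 1)))
print(val)  # 1, not sqrt(2): dr reads off the radial component of the vector
```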

Let $(X,S,\mu)$ be a measure space with $\mu(X)<\infty$. Show that $d(f,g)=\int\frac{|f-g|}{1+|f-g|}d\mu$ is a metric on the space of measurable functions

Posted: 09 Apr 2021 08:19 PM PDT

Let $(X,S,\mu)$ be a measure space with $\mu(X)<\infty$. Show that $d(f,g)=\int\frac{|f-g|}{1+|f-g|}d\mu$ defines a metric on the space of measurable functions.

My work:

Symmetry: let's show $d(f,g)=d(g,f)$.

$d(f,g)=\int\frac{|f-g|}{1+|f-g|}d\mu$ and $d(g,f)=\int\frac{|g-f|}{1+|g-f|}d\mu$

It is obvious that $d(f,g)=d(g,f)$.

My concern is the triangle inequality.

What I want to show is $d(f,h)\leq d(f,g)+d(g,h)$,

so the right hand side becomes,

$\int\frac{|f-g|}{1+|f-g|}d\mu+\int\frac{|g-h|}{1+|g-h|}d\mu= \int\frac{|f-g|}{1+|f-g|}+\frac{|g-h|}{1+|g-h|}d\mu$

I don't think simplifying the right-hand side will give the result easily. Can someone give me a hint? Thank you in advance.
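(Added as a hint for reference; it is not in the original post.) One standard route goes through the function $\varphi(t)=\frac{t}{1+t}$, which is increasing on $[0,\infty)$ and subadditive:

$$\varphi(s+t)=\frac{s+t}{1+s+t}=\frac{s}{1+s+t}+\frac{t}{1+s+t}\leq \frac{s}{1+s}+\frac{t}{1+t}=\varphi(s)+\varphi(t).$$

Applying monotonicity to $|f-h|\leq |f-g|+|g-h|$ and then subadditivity gives $\varphi(|f-h|)\leq\varphi(|f-g|)+\varphi(|g-h|)$ pointwise, and integrating yields the triangle inequality.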

Can we solve for (or approximate) $m$ from $ \frac{(2n-m)!}{(n-m)!} = c$?

Posted: 09 Apr 2021 07:56 PM PDT

I need to solve the following equation for $m$:

$$ \frac{(2n-m)!}{(n-m)!} = c $$

where $m$ and $n$ are natural numbers on the order of 10 with $n \ge m$, and $c$ is a real-valued constant (edit: it's a natural number actually, HT @Kaind). Any chance this can be solved/approximated analytically?
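(Added for illustration, not from the post.) Since $\frac{(2n-m)!}{(n-m)!} = \prod_{k=n-m+1}^{2n-m} k$ is a product of exactly $n$ consecutive integers and is strictly decreasing in $m$ for fixed $n$, a direct search recovers $m$ whenever a solution exists. A small Python sketch:

```python
from math import factorial

def find_m(n, c):
    """Return m in [0, n] with (2n-m)!/(n-m)! == c, or None."""
    for m in range(n + 1):
        val = factorial(2 * n - m) // factorial(n - m)
        if val == c:
            return m
        if val < c:        # the product is strictly decreasing in m
            return None
    return None

n = 10
c = factorial(2 * n - 4) // factorial(n - 4)  # hide m = 4, then recover it
print(find_m(n, c))  # 4
```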

Show that $\langle 2,1+\sqrt{-5}\rangle \langle3,1+\sqrt{-5} \rangle = \langle1-\sqrt{-5} \rangle$

Posted: 09 Apr 2021 08:12 PM PDT

In $\mathbb Z[\sqrt{-5}]$ I compute $\langle 2,1+\sqrt{-5} \rangle \langle 3,1+\sqrt{-5} \rangle = \langle 6,2+2\sqrt{-5},3+3\sqrt{-5},-4+2\sqrt{-5} \rangle $

So there must be $a+b\sqrt{-5}\in\mathbb Z[\sqrt{-5}]$ such that $(a+b\sqrt{-5})(1-\sqrt{-5})=2+2\sqrt{-5}$, but then we would have $b=\frac{2}{3}$, which is not possible. Am I doing it wrong? How can I prove this?

How far can the convergence of Taylor series be extended?

Posted: 09 Apr 2021 08:16 PM PDT

Taylor series can diverge and have only a limited radius of convergence, but it often seems that this divergence is more a result of the summation method being too narrow than of the series actually diverging.

For instance, take $$\frac{1}{1-x} = \sum_{n=0}^\infty x^n$$ at $x=-1$. This series is considered to diverge, but smoothing the sum gives $\sum_{n=0}^\infty (-1)^n = \frac{1}{2}$, which agrees with $\frac{1}{1-(-1)}$. Similarly, $$\sum_{n=0}^\infty nx^n = x\frac{d}{dx}\frac{1}{1-x} = \frac{x}{(1-x)^2}$$ normally diverges at $x=-1$, but smoothing the sums gives $-\frac{1}{4}$, which agrees with the function. This continues to hold for any number of iterations of taking the derivative and then multiplying by $x$.

One can extend series of these types even further. Taking $$\frac{1}{1+x} =\sum_{n=0}^\infty (-x)^n = \sum_{n=0}^\infty (-1)^ne^{\ln(x)n} = $$ $$\sum_{n=0}^\infty (-1)^n\sum_{m=0}^\infty \frac{\left(\ln(x)n\right)^m}{m!} = \sum_{m=0}^\infty \sum_{n=0}^\infty (-1)^n\frac{\left(\ln(x)n\right)^m}{m!} = $$ $$\sum_{m=0}^\infty\frac{\ln(x)^m}{m!} \sum_{n=0}^\infty (-1)^n n^m = 1 -\sum_{m=0}^\infty\frac{\ln(x)^m}{m!} \eta(-m)$$

This sum converges for all values of $x>0$, and it converges to $\frac{1}{1+x}$.

In general, one can transform any sum of the form $$ \sum_{k=1}^\infty a_k x^k = -\sum_{k=0}^\infty \left(\sum_{n=-\infty}^\infty b_k x^k\right) (-1)^n x^n =$$ $$ -\sum_{k=0}^\infty b_k \sum_{m=0}^\infty\frac{\ln(x)^m}{m!}\eta(-(n+k)) $$

In general, how much is it possible to extend the range of convergence, simply by overloading the summation operation (here I assign values based on using the eta functions, but I can imagine using something like Abel regularization or other regularization)? Are there any interesting results that come from extending the range of convergence to the Taylor series?

One theory I had was that the Taylor series should converge for any alternating series which has a monotonic $|a_k|$. Is this true? Are there any series whose radius of convergence is impossible to increase by changing the summation operation?

Edit: I wanted to add that it might be useful in looking at this question to know that $$ \sum_{m=0}^\infty\frac{\ln(x)^m}{m!}\eta(-(m-w))= -Li_w(-x) $$ where $Li_w$ is the polylogarithm of order $w$. So the sum method I provided transforms a sum into an (in)finite series of polylogarithms.


This is sort of tangential, but I was reflecting a bit more on this problem, and it seems that a Taylor series has enough information to extend a sum until the next real singularity. For instance, if I graph a function in the complex plane, then the radius of convergence is the distance to the nearest singularity. In this image, I have a few different singularities (shown as dots). [image: several singularities shown as dots, with three overlapping discs of convergence] But, since the original Taylor series (call it $T_1(x)$; it is shown as the large blue circle in the image) will exactly match the function within this disk, it is possible to get another Taylor series (call it $T_2(x)$; it is shown as the small blue circle) centered around another point while relying only on derivatives of $T_1(x)$. From $T_2(x)$, it is possible to get $T_3(x)$ (the purple circle), which extends the area of convergence even further. I think this shows that the information contained in the Taylor series is enough to extend all the way to a singularity on the real line, rather than being limited by the distance to a singularity in the complex plane. The only time this wouldn't work is if the set of points on the boundary circle where the function diverges is infinite and dense enough not to allow any circles to 'squeeze through' using the previous method. So in theory, it seems like any Taylor series which has a non-zero radius of convergence and a non-dense set of singularities uniquely determines a function that can be defined on the entire complex plane (minus singularities).

Edit 2: This is what I have so far for iterating the Taylor series. If the Taylor series for $f(x)$ is $$T_1(x)=\sum_{n=0}^\infty \frac{f^{(n)}(0)}{n!} x^n$$ and the radius of convergence is $R$, then we can center $T_2(x)$ around $x=\frac{9}{10}R$, since that is within the region of convergence. We get $$T_2(x) = \sum_{n=0}^\infty \frac{T_1^{(n)}\left(\frac{9}{10}R\right)}{n!} \left(x-\frac{9}{10}R\right)^n = \sum_{n=0}^\infty \left(\sum_{k=n}^\infty f^{(k)}\left(0\right) \frac{1}{(k-n)!\,n!} \left(\frac{9}{10}R\right)^{k-n}\right) \left(x-\frac{9}{10}R\right)^n$$ and in general, centered around $x_{\text{center}}$, $$T_{w+1}(x) = \sum_{n=0}^\infty \frac{T_w^{(n)}\left(x_{\text{center}}\right)}{n!} \left(x-x_{\text{center}}\right)^n.$$ I tested this out for $\ln(x)$, and it seemed to work well, but I suppose it could fail if taking the derivative of $T_w(x)$ too many times causes $T_w(x)$ to no longer match $f(x)$ closely.

I tested out this method of extending a function with Desmos; here is the link if you would like to try it: https://www.desmos.com/calculator/fwvuasolla
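(Added as a small numeric sketch of the recentering idea; the test function $1/(1-x)$ and the shift point are illustrative choices, not from the post.) Starting from the Taylor coefficients at $0$ (radius $1$), recentering at $c=-0.5$ yields a disc of radius $1.5$ that covers $x=-1.2$, which lies outside the original disc:

```python
from math import comb

N = 300
a = [1.0] * N            # Taylor coefficients of 1/(1-x) at 0: a_k = 1
c = -0.5                 # new center, inside the original disc of radius 1

# Taylor shift: b_n = T1^(n)(c)/n! = sum_{k>=n} a_k * C(k, n) * c^(k-n)
b = [sum(a[k] * comb(k, n) * c ** (k - n) for k in range(n, N))
     for n in range(N)]

x = -1.2                 # outside |x| < 1 but inside |x - c| < 1.5
T2 = sum(b[n] * (x - c) ** n for n in range(N))
print(T2, 1 / (1 - x))   # both ~ 0.454545...
```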

A question based on the [x] (the greatest integer function)

Posted: 09 Apr 2021 08:28 PM PDT

What is the number of natural numbers less than $2009$ that can be expressed in the form $[X\,[X]]$, where $X$ is a positive real number and $[X]$ denotes the greatest integer less than or equal to $X$?

I tried to make a graph of the function, but it's of no help.
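(Added as a brute-force check, not from the question.) For $x \in [k, k+1)$ we have $[x] = k$, so $x[x]$ sweeps $[k^2, k^2+k)$ and $[x[x]]$ takes exactly the integer values $k^2, \dots, k^2+k-1$. A short Python count over these ranges:

```python
values = set()
k = 1
while k * k < 2009:
    # for x in [k, k+1), floor(x*floor(x)) takes the values k^2 .. k^2+k-1
    values.update(v for v in range(k * k, k * k + k) if v < 2009)
    k += 1
print(len(values))
```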

Integral including an Incomplete Gamma function

Posted: 09 Apr 2021 08:18 PM PDT

Does anyone have an idea for solving the following integral $$ I =\int_{0}^{\infty}\frac{x\Gamma \left(a,b x\right)}{(x^2+1)}\,dx\,,$$ where $a,b>0$ are positive real values?

Mathematica gives an answer, but I do not know how it can be derived by using integral tables.

\begin{align} I=&\frac{1}{{2 a (a+1)}}\biggl(\pi (a+1) b^a \csc \left(\frac{\pi a}{2}\right) \, _1F_2\left(\frac{a}{2};\frac{1}{2},\frac{a}{2}+1;-\frac{b^2}{4}\right) \\ &+a b \left((a+1) b \Gamma (a-2) \, _2F_3\left(1,1;2,\frac{3}{2}-\frac{a}{2},2-\frac{a}{2};-\frac{b^2}{4}\right)-\pi b^a \sec \left(\frac{\pi a}{2}\right) \, _1F_2\left(\frac{a}{2}+\frac{1}{2};\frac{3}{2},\frac{a}{2}+\frac{3}{2};-\frac{b^2}{4}\right)\right)+2 \Gamma (a+2) (\psi ^{(0)}(a)-\log (b))\biggr) \end{align}

Eq. 2.10.1.3 in Integrals and Series [Vol 2 - Spl Functions] - A. Prudnikov would be one of the closest matches, but I could not make the parameters work.
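(Added for illustration, not from the post.) When checking candidate closed forms, a high-precision numerical value of the integral gives a concrete target. An mpmath sketch; the values of $a$ and $b$ are arbitrary test choices:

```python
from mpmath import mp, mpf, gammainc, quad, inf

mp.dps = 30                       # 30 significant digits
a, b = mpf('1.5'), mpf('2.0')     # arbitrary positive test parameters

# I = int_0^inf x * Gamma(a, b x) / (x^2 + 1) dx
I = quad(lambda x: x * gammainc(a, b * x) / (x**2 + 1), [0, inf])
print(I)  # compare against any proposed closed form at the same (a, b)
```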

Solution of the linear differential equation: $\frac {dy}{dx} + P(x) \cdot y=Q(x)$. What is the error in this approach?

Posted: 09 Apr 2021 07:59 PM PDT

Derive the solution of the linear differential equation: $\frac {dy}{dx} + P(x) \cdot y=Q(x)$

Rewriting the given differential equation, we obtain: $(Py-Q) dx+1 \cdot dy=0$.

Let $M=Py-Q, N=1$. Then : $\dfrac {\partial M }{\partial y}=M_y=P$

and $\dfrac {\partial N}{\partial x}=N_x=0$.

Thus $\dfrac{M_y-N_x}{N}=P(x)$, so the integrating factor is $I.F.= e^{\int P dx}$. Therefore $e^{\int P dx}(Py-Q) dx+e^{\int P dx} \cdot dy=0$ is an exact differential equation.

The solution of this exact differential equation is $\int_{\text {treat y as constant} } M dx + \int \text{terms in N not containing x}~~ dy= $ constant

$\implies \int e^{\int P dx}(Py-Q)~ dx + 0=c$

$\implies y \int P~ e^{\int P dx}~dx = \int e^{\int P dx} Q ~dx + c.~$ But the solution of the differential equation in almost every textbook is given as $y\, e^{\int P dx} = \int e^{\int P dx} Q ~dx + c$.

What is the error in the above steps? Thanks a lot for your help.
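(Added for reference; not part of the original question.) One identity worth checking when comparing the two final forms: since $\frac{d}{dx}e^{\int P\,dx}=P\,e^{\int P\,dx}$, we have

$$\int P\,e^{\int P\,dx}\,dx = e^{\int P\,dx} + \text{constant},$$

so the two forms may be closer than they first appear.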

Does the series converge uniformly on $\mathbb{R}$?

Posted: 09 Apr 2021 08:13 PM PDT

Verify the following series converges uniformly on $\mathbb{R}$ $$\sum_{n=1}^{\infty}(-1)^n\frac{n}{n^2+x^2}$$
For every $M>0$, the series converges uniformly on $[-M,M]$ by the Dirichlet test for uniform convergence. How can I prove that it converges uniformly on all of $\mathbb{R}$?

Can we draw the graph of the derivative/integral of a function by using the graph of the function only?

Posted: 09 Apr 2021 08:03 PM PDT

Consider a function, say $F(x) = x^2 + 5\sin x$; then its derivative is $F'(x) = 2x + 5\cos x$, and from that formula we can draw the graph of $F'(x)$ quite easily. But can we plot that graph using only the graph of $F(x)$? Since the derivative gives the slope of the curve at each point, does that let us trace the graph of $F'(x)$ with only the graph of $F(x)$ to go on?

At first I was wondering about the graph of the derivative; then I wondered similarly about $\int F(x) \, dx$: since it is the antiderivative, can we get its graph just from the graph of $F(x)$?
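(Added for illustration, not from the question.) Numerically, at least, both directions work from sampled graph data alone; the sampling grid below is an arbitrary choice:

```python
import numpy as np
from scipy.integrate import cumulative_trapezoid

x = np.linspace(-10, 10, 2001)
F = x**2 + 5 * np.sin(x)          # pretend these samples were read off the graph

dF = np.gradient(F, x)            # slope estimates: the graph of F'(x)
intF = cumulative_trapezoid(F, x, initial=0.0)  # one antiderivative of F

print(np.max(np.abs(dF - (2 * x + 5 * np.cos(x)))))  # small discretization error
```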
