Friday, February 4, 2022

Recent Questions - Mathematics Stack Exchange

Martingale is Regular

Posted: 04 Feb 2022 08:00 AM PST

Let $(Z_n)_{n\ge0}$ be a sequence of i.i.d. r.v.'s with $P(Z_1=1)=P(Z_1=-1)=\frac{1}{2}$. Let $S_0=0$ and $S_n=Z_1+\dots+Z_n$. Let $\mathcal{F}_0=\{\emptyset,\Omega\}$ and $\mathcal{F}_n=\sigma(Z_1,\dots,Z_n)$. Let $a>0$, $0<\lambda<\frac{\pi}{2a}$, and $T=\inf\{n\ge1:|S_n|=a\}$.

So far, I have shown that $$ X_n=(\cos{\lambda})^{-n}\cos{(\lambda S_n)} $$ is a martingale and that $T$ is almost surely finite. Now, I must further prove that $(X_{n\wedge T})_{n\ge0}$ is a regular martingale, but I don't see how.

Could someone give me a hint?
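
In case it helps to experiment before writing the proof, here is a minimal Monte Carlo sketch of the stopped process $(X_{n\wedge T})$; the values of $a$, $\lambda$, the horizon and the number of paths are arbitrary choices for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
a = 3
lam = np.pi / (4 * a)          # any 0 < lambda < pi/(2a) works; arbitrary choice
n_paths, n_steps = 5000, 200   # n_steps plays the role of n in X_{min(n, T)}

vals = np.empty(n_paths)
for k in range(n_paths):
    S = 0
    X = 1.0                    # X_0 = cos(0) / cos(lam)^0 = 1
    for i in range(1, n_steps + 1):
        if abs(S) == a:        # T has been reached: X_{min(n, T)} stays frozen at X_T
            break
        S += rng.choice((-1, 1))
        X = np.cos(lam * S) / np.cos(lam) ** i
    vals[k] = X

# By optional stopping at the bounded time min(n, T), E[X_{min(n, T)}] = E[X_0] = 1,
# so the empirical mean should be close to 1.
print(vals.mean(), np.quantile(vals, 0.99))
```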

How to solve an optimization problem of this form

Posted: 04 Feb 2022 07:59 AM PST

Consider the following optimization problem: $$ \operatorname{minimize}_{x,y}\quad ax^{2}-bxy-cx=0 $$ $$ \text{subject to}\quad x\geq1,\ y\geq5 $$ where $a$, $b$ and $c$ are scalars. This problem arose while I was modelling a transistor's minimal dimensions $x$ and $y$ for it to operate in the saturation region and behave as an audio amplifier.

Unfortunately, I do not see how this quadratic objective could be convex, so I am not sure what a good approach to this problem might be. What I did, essentially, was assume $L=1$ and compute $W$ directly.
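
Not an answer, but in case a numerical experiment helps: below is a minimal sketch using scipy, assuming the objective is the quadratic $ax^{2}-bxy-cx$ (reading the trailing "$=0$" as a typo) and using made-up values for $a$, $b$, $c$; for a transistor-sizing model the actual constants would come from the device equations.

```python
import numpy as np
from scipy.optimize import minimize

a, b, c = 2.0, -1.0, 3.0          # placeholder constants, not real device parameters

def objective(v):
    x, y = v
    return a * x**2 - b * x * y - c * x

# box constraints x >= 1, y >= 5
res = minimize(objective, x0=np.array([2.0, 6.0]), bounds=[(1, None), (5, None)])
print(res.x, res.fun)
```

Note that for other signs of $b$ the bilinear term $-bxy$ makes the problem unbounded below as $y\to\infty$, so whether a minimum exists at all depends on the constants; that is worth checking before choosing a method.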

Finding the nature of the initial distribution, given that the intermediate distributions and the final distribution are uniform.

Posted: 04 Feb 2022 07:58 AM PST

The problem is as follows :

Fix $n \geq 4$. Suppose there is a particle that moves randomly on the number line, but never leaves the set $\{1, 2, \dots, n\}$. Let the initial probability distribution of the particle be denoted by $\overrightarrow{\pi}$. In the first step, if the particle is at position $i$, it moves to one of the positions in $\{1, 2, \dots, i\}$ with uniform distribution; in the second step, if the particle is at location $j$, then it moves to one of the locations in $\{j, j + 1, \dots, n\}$ with uniform distribution. Suppose that after two steps, the final distribution of the particle is uniform. What is the initial distribution $\overrightarrow{\pi}$?

The options are :

(a) $\overrightarrow{\pi}$ is not unique
(b) $\overrightarrow{\pi}$ is uniform
(c) $\overrightarrow{\pi}(i)$ is non-zero for all even $i$ and zero otherwise
(d) $\overrightarrow{\pi}(1) = 1$ and $\overrightarrow{\pi}(i) = 0$ for $i \neq 1$
(e) $\overrightarrow{\pi}(n) = 1$ and $\overrightarrow{\pi}(i) = 0$ for $i \neq n$

I'd like to know how to find the initial distribution, given the information about the transition steps and the final distribution. What topic do I need to learn to solve such problems?
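
Not a substitute for the theory, but here is a minimal numerical sketch (with a made-up value of $n$) that builds the two one-step transition matrices described above and solves $\overrightarrow{\pi}\, P_1 P_2 = \text{uniform}$ for $\overrightarrow{\pi}$; comparing the output with the options is a quick sanity check.

```python
import numpy as np

n = 6  # any n >= 4; arbitrary choice for illustration

# Step 1: from state i (1-indexed), move uniformly to one of {1, ..., i}
P1 = np.zeros((n, n))
for i in range(1, n + 1):
    P1[i - 1, :i] = 1.0 / i

# Step 2: from state j, move uniformly to one of {j, ..., n}
P2 = np.zeros((n, n))
for j in range(1, n + 1):
    P2[j - 1, j - 1:] = 1.0 / (n - j + 1)

uniform = np.full(n, 1.0 / n)
# Both matrices are triangular with nonzero diagonal, so P1 @ P2 is invertible
# and pi is uniquely determined by  pi @ (P1 @ P2) = uniform.
pi = np.linalg.solve((P1 @ P2).T, uniform)
print(np.round(pi, 6))
```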

Finding the conjugacy classes of $\{ (x,y) \in S_n\times S_n : x\equiv y \bmod A_n \}$

Posted: 04 Feb 2022 07:58 AM PST

I solved the following question: given the subgroup of $S_3\times S_3$

$$G_3 := \left\{ (x,y) \in S_3\times S_3 : x\equiv y \bmod A_3 \right\},$$

find all its conjugacy classes and their corresponding orders. I proceeded like this, using, where necessary, the fact that conjugation preserves cycle type:

  1. $G_3$ is the set of pairs of permutations of the same parity: either both are even or both are odd. This allows us to write

$$G_3 = [(12)A_3\times (12)A_3]\sqcup[A_3\times A_3],$$

where $\sqcup$ denotes disjoint union.

  2. With that, we can list the conjugacy classes:

    • we have the class of $(1,1)$, with only 1 element;

    • we have the class of elements of the type $((**),(\star\star))$, i.e., pairs of transpositions, which has 9 elements, since such an element is stabilized only by the identity and by the element itself;

    • we have the class of elements of the type $(1,(***))$, which has only 2 elements, since every conjugate of $1$ is $1$, conjugating a 3-cycle by a transposition gives the other 3-cycle (the inverse of the original), and conjugating a 3-cycle by a 3-cycle gives back the original 3-cycle;

    • we have the classes of elements of the type $((***),1)$, $((***),(***))$ and $((***),(\star\star\star))$, with 2 elements each, for the same reasons as in the previous item.

My question is whether this approach can be generalized to $S_n\times S_n$. Namely, if

$$G_n = \left\{ (x,y) \in S_n\times S_n : x\equiv y \bmod A_n \right\},$$

can we follow the same line as with $n=3$ to find all the conjugacy classes of $G_n$?

What I (think) I have found so far is that we can still write

$$G_n = [(12)A_n\times (12)A_n]\sqcup[A_n\times A_n],$$

since we are still thinking of pairs of permutations of the same parity. But how does the previous method carry over to this case?

PS: by $x\equiv y\bmod A_n$ I mean $xy^{-1} \in A_n$.

PPS: this is my first posted question, so I apologize in advance for any mistakes or inadequate formatting. :-)
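
Not an answer to the general question, but if it helps to experiment: the following brute-force sketch computes the conjugacy classes of $G_n$ for small $n$ (here $n=3$), which can be used to check the hand computation above and to look at $n=4,5$ before guessing a general pattern.

```python
from itertools import permutations

n = 3
Sn = list(permutations(range(n)))

def compose(p, q):                      # (p*q)(i) = p(q(i))
    return tuple(p[q[i]] for i in range(n))

def inverse(p):
    inv = [0] * n
    for i, pi in enumerate(p):
        inv[pi] = i
    return tuple(inv)

def parity(p):                          # 0 for even, 1 for odd (inversions mod 2)
    return sum(p[i] > p[j] for i in range(n) for j in range(i + 1, n)) % 2

# G_n = { (x, y) : x and y have the same parity }, i.e. x y^{-1} in A_n
G = [(x, y) for x in Sn for y in Sn if parity(x) == parity(y)]

remaining, classes = set(G), []
while remaining:
    g = next(iter(remaining))
    orbit = {(compose(compose(a, g[0]), inverse(a)),
              compose(compose(b, g[1]), inverse(b))) for (a, b) in G}
    classes.append(orbit)
    remaining -= orbit

print(len(G), sorted(len(c) for c in classes))   # for n = 3: 18 and class sizes [1, 2, 2, 2, 2, 9]
```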

Can an $n$-th degree multivariable polynomial be equal to a higher-degree Taylor polynomial?

Posted: 04 Feb 2022 07:57 AM PST

Recently, I stumbled across the question:

Consider $p(x,y)$ a 5th degree polynomial. Find the correct statement:

  1. $p(x,y)$ is equal in $\mathbb{R}^2$ to its 8th degree Taylor polynomial at the point $(1,2)$

  2. 1 is incorrect, but $p(x,y)$ is equal in $\mathbb{R}^2$ to its 5th degree Taylor polynomial at the point $(1,2)$

Intuitively, given an $n$-th degree polynomial you cannot produce a Taylor polynomial of degree higher than $n$ with new nonzero terms, since you would need nonzero partial derivatives of order higher than $n$, which are impossible to get: every partial derivative you take lowers the degree of $p(x,y)$ by one, so by the time you reach the $n$-th order partials you are left with constants and cannot take any more nonzero partials.

However, the answer to the question above seems to be 1, and I don't understand how one could obtain nonzero coefficients for the monomials of degree higher than $n$.
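
A small sympy experiment (with a made-up degree-5 polynomial) may clarify what statement 1 is saying: the 8th degree Taylor polynomial of $p$ at $(1,2)$ has all its coefficients of total degree $6,7,8$ equal to zero, so it coincides with $p$, exactly as the intuition about vanishing higher-order partials suggests.

```python
import sympy as sp

x, y, h, k = sp.symbols('x y h k')
p = x**5 - 2*x**3*y**2 + 7*x*y + y**4 + 1      # an arbitrary degree-5 polynomial

# Expanding p around (1, 2): substitute x = 1 + h, y = 2 + k.
shifted = sp.expand(p.subs({x: 1 + h, y: 2 + k}))

# The expansion in powers of (h, k) has total degree 5, i.e. the Taylor coefficients
# of orders 6, 7, 8 all vanish, so the degree-8 Taylor polynomial at (1, 2) equals p.
print(sp.Poly(shifted, h, k).total_degree())                    # 5
print(sp.simplify(shifted.subs({h: x - 1, k: y - 2}) - p))      # 0
```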

Lower bound on the probability of a sum of iid random variables larger than a threshold

Posted: 04 Feb 2022 07:53 AM PST

Suppose you are given $n$ iid random variables $X_1,\ldots,X_n$, each taking values in $\{1,2,3,\ldots\}$. Each $X_i$ has the cumulative distribution function $P(X_i \le v) = 1 - \frac{1}{(v+1)^p}$ for some $p \ge 3$. I am interested in lower bounding the probability $P(X_1 + X_2 + \ldots + X_n \ge t)$. In particular, is it possible to get a constant lower bound on this probability for $t = n^{\gamma}$ for some $\gamma > 1$? In case this is not possible, I would also appreciate a counterexample, e.g. for $p=3$.
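
Not a proof in either direction, but a crude Monte Carlo sketch (inverse-transform sampling from the given CDF, with made-up values of $n$ and $\gamma$) gives a feel for the size of the probability; for $p=3$ and $t=n^{\gamma}$ with $\gamma>1$ the threshold is far above the mean $n\,\mathbb{E}[X_1]$, and the empirical frequency is essentially $0$ at any reachable sample size, which already suggests that a constant lower bound should not be expected.

```python
import numpy as np

rng = np.random.default_rng(0)
p, gamma, n, n_trials = 3, 1.1, 1000, 20000      # made-up parameters

# Inverse-transform sampling: P(X <= v) = 1 - (v + 1)^(-p) for integers v >= 1,
# so X = max(1, ceil((1 - U)^(-1/p) - 1)) for U uniform on (0, 1).
U = rng.random((n_trials, n))
X = np.maximum(np.ceil((1.0 - U) ** (-1.0 / p) - 1.0), 1)

t = n ** gamma
S = X.sum(axis=1)
print(S.mean(), t, np.mean(S >= t))   # the empirical frequency of {S >= t} is tiny here
```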

If $ \lim_{x\to\infty} \frac{f(x)}{x} = 1$, then $\exists x_n \rightarrow \infty$ such that $\lim_{n\to\infty} f'(x_n) = 1$.

Posted: 04 Feb 2022 08:02 AM PST

Let $f: \mathbb{R} \longrightarrow \mathbb{R}$ be a differentiable function. If $$ \lim_{x\to\infty} \frac{f(x)}{x} = 1, $$ then there exists a sequence $(x_n)$ such that $x_n \rightarrow \infty$ as $n \rightarrow \infty$ and $$ \lim_{n\to\infty} f'(x_n) = 1. $$

We tried to use the Mean Value Theorem. For each $n$, there exists $x_n \in (0,n)$ such that $$ f'(x_n) = \frac{f(n) - f(0)}{n}. $$ Hence, $\lim_{n\rightarrow \infty} f'(x_n)=1$. But we are not sure that $x_n \rightarrow \infty$.

Thank you for any hint.

Numerical simulation of SDEs driven by Lévy processes (particularly stable processes)

Posted: 04 Feb 2022 07:54 AM PST

I'm trying to learn how to numerically simulate SDEs of the form $$ dX(t) = f(t,X)\,dt + g(t,X)\,dZ(t), $$ where $Z(t)$ is a Lévy process with triplet $(a,0,\nu)$.

My question: what is currently considered the standard approach to numerically simulate equations like these?

I'm particularly interested in the case of stable processes, where

$$ \nu(dz) = \left[ c_{-}|z|^{-\alpha - 1}\mathbb{I}_{z<0} + c_{+}|z|^{-\alpha - 1}\mathbb{I}_{z>0} \right]dz $$

From what I have read, I saw two possible "easier" techniques:

  1. Euler scheme: $$ X_{t+\delta t} - X_t = f(t,X_t) \delta t + g(t,X_t) (Z_{t+\delta t} - Z_t) $$

  2. Through the Lévy–Itô decomposition, using truncation of small jumps + a diffusion correction: $$ dX(t) = \left[f(t,X) - g(t,X)\left(a - \int_{\epsilon \leq |z| < 1} z \nu(dz) \right) \right] dt + g(t,X)\left( \int_{0 < |z| \leq \epsilon}z^2\nu(dz) \right) dW(t) + \int_{|z|>\epsilon}g(t,X_{-})N_{\epsilon}(dz,dt), $$ where $N_{\epsilon}$ is the compound Poisson process obtained by truncating the jumps at an $\epsilon \in (0,1)$. Afterwards, apply the Euler scheme to this truncated version of the SDE, simulating $N_{\epsilon}$ with intensity $\lambda_{\epsilon} = \nu( (\epsilon, \infty))$ and jump sizes with distribution $U = \int_{-\infty}^{z} \nu(dz)/\lambda_{\epsilon}$.

I have already prepared code to use scheme 2, because I already had code for simulating Brownian motion and compound Poisson processes; however, I read in this doctoral thesis (Numerical Approximation of Stochastic Differential Equations Driven by Levy Motion with Infinitely Many Jumps) that the author was not able to prove that this scheme has strong convergence for Lévy processes with infinite second moment (such as stable processes), which is worrying. Additionally, the diffusive correction is only valid for some processes (not Gamma processes, for example), and it is inherently symmetric, so it cannot express possible asymmetry in the small jumps.

Regarding scheme 1 (there are some remarks about its convergence properties in "The Euler scheme for Lévy driven stochastic differential equations: limit theorems"), it seems easier to implement, but the integration interface I use does not support custom increment distributions, so I would have to develop it all from scratch in a way that works with everything else.

Alternatively, I have found "Simulation of non-Lipschitz stochastic differential equations driven by α-stable noise: a method based on deterministic homogenisation", but I am currently quite ignorant about the Marcus integral, and I would rather simulate Itô integrals because I already have a lot of code for them.
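
In case it is useful as a starting point, here is a minimal sketch of scheme 1 for a symmetric $\alpha$-stable driver, using scipy's `levy_stable` to sample the increments; the drift/noise coefficients, $\alpha$, and the scaling of the increments by $\delta t^{1/\alpha}$ (self-similarity of the stable process; conventions for skewed or $\alpha=1$ cases differ between parametrizations) are all illustrative assumptions, not a vetted implementation.

```python
import numpy as np
from scipy.stats import levy_stable

rng = np.random.default_rng(0)
alpha, beta = 1.5, 0.0          # made-up stability index and skewness
T_end, N = 1.0, 1000
dt = T_end / N

def f(t, x):                    # placeholder drift
    return -x

def g(t, x):                    # placeholder noise coefficient
    return 1.0

# Increments of the alpha-stable Levy process over a step dt: by self-similarity
# they are stable with scale dt**(1/alpha) (symmetric case; check the parametrization
# you need for skewed drivers).
dZ = levy_stable.rvs(alpha, beta, scale=dt ** (1.0 / alpha), size=N, random_state=rng)

X = np.empty(N + 1)
X[0] = 1.0
for i in range(N):
    t = i * dt
    X[i + 1] = X[i] + f(t, X[i]) * dt + g(t, X[i]) * dZ[i]

print(X[-1])
```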

Find all functions $f:\mathbb R \rightarrow \mathbb R$ that have the following two properties

Posted: 04 Feb 2022 07:53 AM PST

Find all functions $f:\mathbb R \rightarrow \mathbb R$ that have the following two properties:

(i) $f(f(x))=x$ $\;$ $\forall x \in \mathbb R$

(ii) if $x \geq y$ then $f(x)\geq f(y)$

My Approach:

$f(f(x))=x$ $\implies$ $f(x)$ is one-one and onto.

So let $f(x)=t$.

From property $(i)$, $f(t)=f(f(x))=x$.

Suppose $t\neq x$.

Case $(1)$: $x<t$. Then by $(ii)$, $f(x)\leq f(t)$, i.e. $t\leq x$. Contradiction.

Case $(2)$: $t<x$. Then by $(ii)$, $f(t)\leq f(x)$, i.e. $x\leq t$. Again a contradiction.

From Case $(1)$ and Case $(2)$, $x=t$, that is,

$f(x)=x$ $\;$ $\forall x \in \mathbb R$

My doubt: is my conclusion that $f$ is bijective correct, just from seeing $f(f(x))=x$?

Am I missing anything?

Other ways to solve this problem are also appreciated.

Convergence of $\int_0^\infty \frac{\ln(1+x^2)}{(x+a)^2}\,dx$

Posted: 04 Feb 2022 07:59 AM PST

I thought about using the majorant criterion, since $(x+a)^2\gt x^2$ for positive $a$, but that didn't lead anywhere.

For negative $a$, the integrand is singular at $x=-a$, so we split:

$$\int_0^{-a} \frac{\ln(1+x^2)}{(x+a)^2}\,dx + \int_{-a}^\infty \frac{\ln(1+x^2)}{(x+a)^2}\,dx.$$

I thought about using the limit comparison criterion on the second one, since the integrand is positive, but that also didn't work out.

Any hints?
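
Not a hint toward a proof, but a quick numerical sanity check with mpmath (the values $a=1$ and $a=-1$ are arbitrary) shows the expected behaviour: a finite value for positive $a$, and truncated integrals that blow up as the singularity at $x=-a$ is approached for negative $a$.

```python
from mpmath import mp, quad, log, inf

mp.dps = 20
f = lambda x, a: log(1 + x**2) / (x + a)**2

# a > 0: the tail behaves like 2*log(x)/x^2, so the integral is finite
print(quad(lambda x: f(x, 1), [0, inf]))

# a = -1: non-integrable singularity at x = 1; the truncated integrals grow without bound
for eps in (1e-2, 1e-4, 1e-6):
    print(eps, quad(lambda x: f(x, -1), [0, 1 - eps]))
```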

Is this set neither open nor closed?

Posted: 04 Feb 2022 07:44 AM PST

Let $B=\{x+iy\in \mathbb{C} | x,y\in \mathbb{Q}\}$. I think that this set is neither closed nor open. Am I right?

Constructing $A(\neq I_n) \in M_{n}(\mathbb{Q})$ with $A^{n+1} = I_n$

Posted: 04 Feb 2022 07:59 AM PST

I want to find $A \in M_{n}(\mathbb{Q})$, not equal to the identity matrix, such that $A^{n+1} = I_n$.


From my previous post: if $A \in M_{n\times n}(\mathbb{Q})$ with $A\neq I_n$ and $A^p=I_n$, then $p\leq n+1$.

The following are my attempts:

Note that the matrix equation gives $t^{n+1} - 1$, and $t^{n+1} -1 = (t-1) (1+t+t^2+ \cdots + t^{n})$. Note that the minimal polynomial divides this, and since $m_A(t) \neq t-1$, it divides $(1+t+\cdots + t^{n})$. But this can be reducible ...
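
In case a concrete construction helps to test the idea: the companion matrix of $1+t+\cdots+t^{n}$ is an $n\times n$ rational (in fact integer) matrix whose characteristic polynomial is $1+t+\cdots+t^{n}$, which divides $t^{n+1}-1$, so it satisfies $A^{n+1}=I_n$ and $A\neq I_n$ regardless of whether the polynomial is irreducible. A small numpy sketch:

```python
import numpy as np

n = 5
# Companion matrix of p(t) = t^n + t^(n-1) + ... + t + 1: ones on the subdiagonal,
# last column equal to minus the lower-order coefficients of p (all 1 here).
A = np.zeros((n, n), dtype=int)
A[1:, :-1] = np.eye(n - 1, dtype=int)
A[:, -1] = -1

An1 = np.linalg.matrix_power(A, n + 1)
print(An1)                                          # the identity matrix
print(np.array_equal(An1, np.eye(n, dtype=int)))    # True
```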

Have I determined the inverse element for this group correctly?

Posted: 04 Feb 2022 07:40 AM PST

The operation $\circ $ has been defined on the set $S=\{ (a,b)|a, b\in \mathbb{R} \}$ in the following manner: $(a,b)\circ (c,d)=(a+c+1,b+d)$. Show that $(S,\circ)$ is a group.

I've already finished showing closure, associativity and identity, but I'm not sure if the inverse element is correct. More specifically, is $(a,b)^{-1}$ in this case, the same as $(a^{-1},b^{-1})$?

Here's what I've tried doing so far:

Let $(a,b)\in S$, then there exists an $(a,b)^{-1}\in S$ such that:

$(a,b)\circ (a,b)^{-1}=(a,b)^{-1}\circ (a,b)=(-1,0)$ (the identity element)

From here, we conclude that:

$a^{-1}=-a-2, \qquad b^{-1}=-b$

And therefore the inverse of $(a,b)$ is $(-a-2,-b)$.

Limit problem with uniformly continuous function

Posted: 04 Feb 2022 07:51 AM PST

Let $f:[0,+\infty)\to \mathbb{R} $ be uniformly continuous and $\lim_{n\to \infty} f(x+n)=0$ ($n \in \mathbb{N}$) for all $x \in [0,+\infty)$. Prove that $$\lim_{x\to \infty} f(x)=0$$

I tried some manipulations with the definitions, but I haven't gotten anywhere.

Finding extreme values of $ f :\mathbb R^2\rightarrow \mathbb R$, when the determinant $\Delta = AC - B^2 = 0$.

Posted: 04 Feb 2022 08:02 AM PST

We generally rely on the result:

[Let $f$ be a real-valued function on $\mathbb R^2$ with continuous second-order partial derivatives at a stationary point $\vec a$ in $\mathbb R^2$. Let

$A = D_{11}f(\vec a)$,

$ B = D_{12}f(\vec a) = D_{21}f(\vec a) $

$C = D_{22}f(\vec a)$,

and let $\Delta = AC - B^2$ . Then,

(a) If $\Delta>0 $ and $A>0$, $f$ has a relative minimum at $\vec a$

(b) If $\Delta>0 $ and $A<0$, $f$ has a relative maximum at $\vec a$

(c) If $\Delta<0 $ , $f$ has a saddle point at $\vec a$ ]

When $\Delta = 0$, $f$ can have a local minimum, a local maximum, or a saddle point at $\vec a$.

What are the other ways to proceed when we have $\Delta = 0$ or $A = 0$?
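
There is no single recipe, but a quick symbolic experiment (with made-up functions) shows why: all three examples below have $\Delta = 0$ at the origin, yet one has a minimum, one a maximum, and one a saddle, so in the degenerate case one has to examine higher-order terms (e.g. the full Taylor expansion, or the behaviour of $f$ along curves through the point).

```python
import sympy as sp

x, y = sp.symbols('x y')
# Three functions with a stationary point at the origin where Delta = AC - B^2 = 0,
# but with different local behaviour: minimum, maximum, saddle.
examples = [x**4 + y**4, -(x**4 + y**4), x**4 - y**4]
for f in examples:
    H = sp.hessian(f, (x, y)).subs({x: 0, y: 0})
    A, B, C = H[0, 0], H[0, 1], H[1, 1]
    print(f, ' Delta =', A * C - B**2, ' A =', A)
```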

Proving a map of the unit disk fixing the boundary is a homeomorphism

Posted: 04 Feb 2022 07:45 AM PST

The question posed in "A homeomorphism of $B^n$ fixing the boundary?" is that of constructing a homeomorphism $h$ of the closed unit disk $D^n$ that maps the origin $0$ to a given point $p$ in the open ball $B^n$ while preserving the bounding sphere $S^{n-1}$.

The construction proffered in https://math.stackexchange.com/a/1517119/32337 only describes the map $h$ indirectly. If I understand the description correctly, it amounts to defining $h(0) = p$ and, for $z \neq 0$, defining $h(z) = (1 - \|{z}\|) p + \|{z}\| (1/\|{z}\|) z$, that is, $h(z) = (1 - \|{z}\|) p + z$. Geometrically, as in the answer cited above, this linearly maps the radial segment from $0$ to the boundary point $(1/\|z\|) z$ onto the line segment from $p$ to that same boundary point.

It is clear that $h$, so defined, is injective, continuous, maps $D^n$ into itself, and preserves the boundary points.

Question: how does one prove that $h^{-1}$ is also continuous?

I believe that $h^{-1}$ is obtained as follows. First, set $h^{-1}(p) = 0$ and, for $w \neq p$, set $\zeta(w) = (1 - s) p + s w$, where $s$ is the greater of the two real roots of the quadratic equation $\|{(1 - s) p + s w}\|^2 = 1$. Then $h^{-1}(w) = \bigl(\|{w - p}\|/\|{\zeta(w) -p}\|\bigr)\,\zeta(w)$. Geometrically, $h^{-1}(w)$ is obtained by extending the directed line segment from $p$ to $w$ until it hits the boundary $S^{n-1}$ and then mapping the segment from $p$ to that boundary point linearly onto the line segment from $0$ to that same boundary point.

It is easy to see that $h^{-1}$, so defined, is continuous at all points $w \neq p$.

But how does one show directly that $h^{-1}$ is also continuous at the point $p$?

Why do optimal control and reinforcement learning use different notation?

Posted: 04 Feb 2022 07:57 AM PST

In optimal control, state is $x$, control is $u$, and dynamics are $\dot{x}=f(x,u)$. In reinforcement learning, state is $s$, action is $a$, and dynamics are $s'\sim P(s'|s,a)$.

I'm curious why these two fields, which are so similar, use such different notation. I've heard that one reason is that optimal control has roots in the Russian literature, where $u$ is the first letter of the relevant word (whereas in reinforcement learning $s$ stands for state, $a$ for action, etc.), but I haven't been able to find any paper or textbook that describes the reason for the notational divergence. If you have an answer with a source, that would be most helpful!

What are the eigenvalues and eigenvectors of the endomorphism: $T(P) = (X + 1)(X -3)P' - 2XP$

Posted: 04 Feb 2022 07:41 AM PST

Let $T$ be an endomorphism on $\mathbb{R}_2[X]$ defined as: $$T(P) = (X + 1)(X -3)P' - 2XP$$

$T (P) = \alpha P \iff P \text{ is a solution of} \space (\alpha + 2X)P - (X + 1)(X -3)P' = 0 $

  1. Solve the differential equation, and find the condition on $\alpha$ to have a polynomial solution.
  2. Determine the eigenvalues and the eigenvectors of $T$.

  1. Solving the differential equation, we have for $ x \in \mathbb{R}\setminus \{-1, 3\} $:

$$ \frac{P'(x)}{P(x)} = \frac{\alpha + 2x}{(x + 1)(x - 3)} = -\frac{\alpha + 2x}{4(x + 1)} + \frac{\alpha + 2x}{4(x - 3)} $$

Computing the antiderivatives, I found:

$$\begin{align} & \implies \ln(|P(x)|) + C_1 = \ln|x + 1|^{-(\frac{\alpha}{4} + \frac{1}{2})}|x - 3|^{\frac{\alpha}{4} + \frac{3}{2}} + C_2 \\ & \implies P(x) = C|x + 1|^{-(\frac{\alpha}{4} + \frac{1}{2})}|x - 3|^{\frac{\alpha}{4} + \frac{3}{2}} \end{align}$$

With: $C = e^{C_2 - C_1}$

For the solutions to be polynomial we need to have: $ \frac{\alpha}{4} + \frac{1}{2} \leq 0 \text{ and } \frac{\alpha}{4} + \frac{3}{2} \geq 0 \iff -6 \leq \alpha \leq -2 $

Assuming $\alpha$ is an integer and testing the values from $-6$ to $-2$, we find that the only values of $\alpha$ that give a polynomial solution are $-2$ and $-6$; they are the eigenvalues of $T$, and the eigenvectors are:

$E_{-2} = \{ C(x - 3) | C \in \mathbb{R} \}$

$E_{-6} = \{ C(x + 1) | C \in \mathbb{R} \}$

Others found results different from mine; is this correct? Thank you.
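
One way to cross-check the result, independently of the differential equation: $T$ maps $\mathbb{R}_2[X]$ to itself, so one can write its matrix in the basis $\{1, X, X^2\}$ and ask sympy for the eigenvalues and eigenvectors directly; comparing that output with the eigenpairs obtained above should settle whether something went wrong.

```python
import sympy as sp

x = sp.symbols('x')

def T(P):
    return sp.expand((x + 1) * (x - 3) * sp.diff(P, x) - 2 * x * P)

basis = [sp.Integer(1), x, x**2]
M = sp.zeros(3, 3)
for j, b in enumerate(basis):
    coeffs = sp.Poly(T(b), x).all_coeffs()[::-1]   # constant term first
    for i, c in enumerate(coeffs):
        M[i, j] = c

print(M)
print(M.eigenvects())   # eigenvalues of T, with eigenvectors expressed in the basis {1, x, x^2}
```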

Convergence of Puiseux series in a topological algebraically closed field

Posted: 04 Feb 2022 07:51 AM PST

The classical Puiseux theorem states that the field of Puiseux series over an algebraically closed field $F$ of characteristic $0$ is also an algebraically closed field. Hence, for each point of a curve given by a polynomial $f\in F[x,y]$, the polynomial can be written formally as $f(x,y) = A(x)(y-f_1(x))\dots(y-f_k(x))$, where $A,f_1,\dots,f_k$ are formal Puiseux series over $F$. In the complex case, i.e. $F=\mathbb{C}$, it is also known that around this particular point these Puiseux series converge with a positive radius, i.e. there is a $\delta>0$ such that all $f_i$ are defined as functions in $\{|x-x_0|<\delta\}$ and, for all $y\in\mathbb{C}$, $f(x,y) = A(x)(y-f_1(x))\dots(y-f_k(x))$; see for instance II§6 of Introduction to Complex Analytic Geometry by S. Lojasiewicz.

Is there any reference for an extension of the convergence part to topological fields? That is, given an algebraically closed field $F$ of characteristic $0$, which topologies on $F$ would ensure that, in a neighborhood of a point $(x_0,y_0)$ of the curve, the factorization $f(x,y) = A(x)(y-f_1(x))\dots(y-f_k(x))$ still holds with convergent Puiseux series?

Central Limit Theorem and Convergence

Posted: 04 Feb 2022 08:00 AM PST

I'm trying to approach the following question:

Let $\{X_i\}_{i=1} ^\infty$ be a sequence of identically distributed random variables, with $\mathbb{E}(X_i)=0$ and finite variance $\sigma^2$.

For all $i,j\in\mathbb{N}$ s.t. $i+2\le j$, $X_i,X_j$ are independent.

Let $Z\sim N(0,\sigma^2)$.

I'm trying to understand why I can't use the CLT here to say that, by separating $\{X_i\}_{i=1} ^\infty$ into $\{X_{2i}\}_{i=1} ^\infty$ and $\{X_{2i+1}\}_{i=1} ^\infty$, we can conclude that

$\frac{1}{\sqrt n}\sum_{i=1}^{n}X_i\overset{\text d}{\to}Z$
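
A minimal simulation with one concrete example of such a sequence (an assumption for illustration only: $X_i = (Z_i + Z_{i+1})/\sqrt 2$ with $Z_i$ i.i.d. standard normal, which is 1-dependent with $\sigma^2 = 1$) shows that the normalized sums do not converge to $N(0,\sigma^2)$: the limiting variance picks up the covariance between neighbouring terms, which is exactly what the even/odd splitting argument ignores (the two subsequences are not independent of each other).

```python
import numpy as np

rng = np.random.default_rng(0)
n, n_trials = 2000, 5000

# X_i = (Z_i + Z_{i+1}) / sqrt(2): each X_i ~ N(0, 1) and X_i, X_j are
# independent whenever |i - j| >= 2, so the hypotheses above are satisfied.
Z = rng.standard_normal((n_trials, n + 1))
X = (Z[:, :-1] + Z[:, 1:]) / np.sqrt(2)

S = X.sum(axis=1) / np.sqrt(n)
print(S.var())   # close to 1 + 2*Cov(X_1, X_2) = 2, not to sigma^2 = 1
```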

Integral involving product of dilogarithm and an exponential

Posted: 04 Feb 2022 07:43 AM PST

I am interested in the integral \begin{equation} \int_0^1 \mathrm{Li}_2 (u) e^{-a^2 u} d u , ~~~~ (\ast) \end{equation} where $\mathrm{Li}_2$ is the dilogarithm. This integral arose in my attempt to evaluate the double integral \begin{equation} I (a,b) = \int_{\mathbb{R}^2} \arctan^2{ \left( \frac{y+b}{x+a} \right) } e^{- (x^2 + y^2) } d x dy , \end{equation} where $a,b \in \mathbb{R}$. See, for example, this question: Interesting $\arctan$ integral. In fact, I am able to show that \begin{equation} I (a,0) = \frac{\pi}{2} \left( \frac{\pi^2}{6} e^{-a^2} + a^2 \int_0^1 \mathrm{Li}_2 (u) e^{-a^2 u} d u \right) . \end{equation} I would very much like to see the trailing term (i.e., ($\ast$)) evaluated in terms of commonly used special functions. I have not worked with the dilogarithm very much, so I am not sure what might be the best approach to rewrite this. Does anyone here have any suggestions?
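
In case it helps when testing candidate closed forms: the trailing term $(\ast)$ is easy to evaluate numerically with mpmath (the values of $a$ below are arbitrary), and the same script also reports the corresponding value of $I(a,0)$ through the identity quoted above.

```python
from mpmath import mp, mpf, quad, exp, polylog, pi

mp.dps = 30

def star(a):
    # numerical value of (*) = int_0^1 Li_2(u) * exp(-a^2 u) du
    return quad(lambda u: polylog(2, u) * exp(-a**2 * u), [0, 1])

for a in (mpf('0.5'), mpf(1), mpf(2)):
    F = star(a)
    I_a0 = pi / 2 * (pi**2 / 6 * exp(-a**2) + a**2 * F)   # the identity for I(a, 0) quoted above
    print(a, F, I_a0)
```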

Prove that $\sum \frac{a^3}{a^2+b^2}\le \frac12 \sum \frac{b^2}{a}$

Posted: 04 Feb 2022 07:51 AM PST

Let $a,b,c>0$. Prove that $$ \frac{a^3}{a^2+b^2}+\frac{b^3}{b^2+c^2}+\frac{c^3}{c^2+a^2}\le \frac12 \left(\frac{b^2}{a}+\frac{c^2}{b}+\frac{a^2}{c}\right).\tag{1}$$

One idea is to clear the denominators, but in this case Muirhead doesn't work because the inequality is only cyclic, not symmetric.

Another idea would be to apply the Cauchy reverse technique: $$(1)\iff\sum \left(a-\frac{a^3}{a^2+b^2}\right)\ge \frac12 \sum (2a-b^2/a)\iff \sum\frac{ab^2}{a^2+b^2}\ge \frac12\sum\frac{2a^2-b^2}{a}$$ $$\iff \sum \frac{(ab)^2}{a^3+ab^2}\ge \frac12\sum \frac{2a^2-b^2}a.$$ Now we can apply Cauchy-Schwarz, and the problem reduces to $$\frac{(\sum ab)^2}{\sum a^3+\sum ab^2}\ge \frac12\sum \frac{2a^2-b^2}{a},$$ and at this point I am stuck. Here the only idea is to clear the denominators, but as I said above, that cannot work.

General expressions for $\mathcal{L}(n)=\int_{0}^{\infty}\operatorname{Ci}(x)^n\text{d}x$

Posted: 04 Feb 2022 07:51 AM PST

Define $$\operatorname{Ci}(x)=-\int_{x}^{ \infty} \frac{\cos(y)}{y}\text{d}y.$$ It is easy to show $$ \mathcal{L}(1)=\int_{0}^{\infty}\operatorname{Ci}(x)\text{d}x=0 $$ and $$\mathcal{L}(2)=\int_{0}^{\infty}\operatorname{Ci}(x)^2\text{d}x =\frac{\pi}{2}.$$ $\mathcal{L}(3),\mathcal{L}(4)$ are a little bit non-trivial. We have two claims (take a look here for more details): $$\begin{aligned} &\mathcal{L}(3)=-\frac{3\pi}{2}\ln2 \\ &\mathcal{L}(4)=3\pi\operatorname{Li}_2 \left ( \frac{2}{3} \right )+\frac{3\pi}{2}\ln^23 \end{aligned}$$ where the polylogarithms $\operatorname{Li}_n$ are defined by $\displaystyle{\operatorname{Li}_n(z) =\sum_{k=1}^{\infty} \frac{z^k}{k^n}}$ for $|z|<1$.
$\mathcal{L}(5)$ is much more non-trivial. We have $$ \mathcal{L}(5)=\int_{0}^{\infty}\operatorname{Ci}(x)^5\text{d}x =-\frac{15\pi^3}{8}\ln(2)-\frac{15\pi}{2}\ln(2)^3 -\frac{45\pi}{4}\operatorname{Li}_2\left ( \frac{1}{4} \right )\ln(2) -\frac{45\pi}{4}\operatorname{Li}_3\left ( \frac{1}{4} \right ) -\frac{15\pi}{16}\zeta(3). $$ Where $\zeta(n)=\operatorname{Li}_n(1)$ for $\Re(n)>1$.
My question: how can we find further generalizations? I believe that $\mathcal{L}(6)$ can be expressed using ordinary polylogarithms ($\mathcal{L}(7)$ seems impossible). We can also find the closed forms of the following integrals: $$\int_{0}^{\infty}\operatorname{Ci}(x)^4\cos(x)\text{d}x,\int_{0}^{\infty}\operatorname{Ci}(x)^2\frac{\operatorname{Si}(2x)}{x} \cos(x)\text{d}x$$ where $\displaystyle{\operatorname{Si}(x)=\int_{0}^{x} \frac{\sin(t)}{t}\text{d}t}.$


Update.1: Define $\operatorname{si}(x)+\operatorname{Si}(x)=\frac{\pi}{2}$. Here are some results: $$\begin{aligned} &\int_{0}^{\infty}\operatorname{si}(x)\text{d}x=1\\ &\int_{0}^{\infty}\operatorname{si}(x)^2\text{d}x=\frac{\pi}{2}\\ &\int_{0}^{\infty}\operatorname{si}(x)^3\text{d}x=\frac{\pi^2}{4} -\frac{3}{2}\ln^22-\frac{3}{4} \operatorname{Li}_2\left ( \frac{1}{4} \right )\\ &\int_{0}^{\infty}\operatorname{si}(x)^4\text{d}x= \frac{\pi^3}{4} -3\pi\ln^22-\frac{3\pi}{2} \operatorname{Li}_2\left ( \frac{1}{4} \right ) \end{aligned}$$


Update.2: A useful fourier transform $$\int_{0}^{\infty}\operatorname{Ci}(x)^3\cos(a x)\text{d}x =\begin{cases} \frac{\pi \text{Li}_2\left(\frac{1-a}{3}\right)}{4 a}+\frac{\pi \text{Li}_2\left(\frac{a-1}{a-2}\right)}{2 a}+\frac{\pi \text{Li}_2\left(\frac{a+1}{3 (a-1)}\right)}{4 a}+\frac{\pi \text{Li}_2\left(\frac{a-1}{a+1}\right)}{4 a}-\frac{\pi \text{Li}_2\left(\frac{a+1}{a+2}\right)}{2 a}-\frac{\pi \text{Li}_2\left(\frac{a+1}{3}\right)}{4 a}-\frac{\pi \text{Li}_2\left(\frac{a+1}{a-1}\right)}{4 a}-\frac{\pi \text{Li}_2\left(\frac{a-1}{3 (a+1)}\right)}{4 a}+\frac{\pi \log ^2(2-a)}{4 a}-\frac{\pi \log ^2(a+2)}{4 a}+\frac{\pi \log (3) \log (a-1)}{4 a}+\frac{\pi \log (3) \log \left(\frac{a+2}{a+1}\right)}{4 a}-\frac{\pi \log (3) \log (a-2)}{4 a}-\frac{\pi \log (3) \tanh ^{-1}\left(\frac{a}{2}\right)}{2 a} & (0\le a\le1),\\ \frac{\pi \text{Li}_2\left(\frac{a^2}{a^2-1}\right)}{4 a}+\frac{\pi \log \left(-\frac{a}{a+1}\right) \log \left(\frac{1}{1-a^2}\right)}{4 a}+\frac{\pi \text{Li}_2\left(-\frac{a}{2}\right)}{2 a}+\frac{\pi \text{Li}_2(1-a)}{4 a}+\frac{\pi \text{Li}_2\left(\frac{a+2}{2 (1-a)}\right)}{4 a}+\frac{\pi \text{Li}_2\left(-\frac{3}{a-1}\right)}{4 a}+\frac{\pi \text{Li}_2\left(-\frac{1}{a}\right)}{4 a}+\frac{\pi \text{Li}_2\left(\frac{a+2}{2 (a+1)}\right)}{4 a}+\frac{\pi \text{Li}_2\left(\frac{a (a+2)}{(a+1)^2}\right)}{4 a}-\frac{\pi \text{Li}_2\left(-\frac{1}{2}\right)}{4 a}-\frac{\pi \text{Li}_2\left(\frac{1}{1-a}\right)}{4 a}-\frac{\pi \text{Li}_2\left(\frac{a}{a-1}\right)}{4 a}-\frac{\pi \text{Li}_2\left(-\frac{1}{a-1}\right)}{4 a}-\frac{\pi \text{Li}_2(-a)}{4 a}-\frac{\pi \text{Li}_2\left(\frac{1}{a+1}\right)}{4 a}-\frac{3 \pi \text{Li}_2\left(\frac{a}{a+1}\right)}{4 a}-\frac{7 \pi ^3}{24 a}+\frac{3 \pi \log ^2(2)}{8 a}+\frac{\pi \log ^2(a)}{8 a}-\frac{\pi \log ^2(a+1)}{2 a}+\frac{\pi \log (2) \log (a-1)}{4 a}+\frac{\pi \log (2) \log (a+1)}{4 a}-\frac{\pi \log (2) \log (a)}{2 a}-\frac{\pi \log (2) \log (a+2)}{2 a}+\frac{\pi \log \left(\frac{a+2}{a+1}\right) \log \left(\frac{1}{(a+1)^2}\right)}{4 a}+\frac{\pi \log \left(-\frac{1}{a+1}\right) \log \left(\frac{a (a+2)}{(a+1)^2}\right)}{4 a}+\frac{\pi \log (3) \log (a+2)}{4 a}+\frac{\pi \log (a) \log (a+2)}{2 a}-\frac{\pi \log (a) \log (a+1)}{2 a}-\frac{i \pi ^2 \log \left(\frac{1}{1-a}\right)}{4 a}-\frac{\pi \log (3) \log (a-1)}{4 a}-\frac{\pi \log \left(-\frac{1}{a+1}\right) \log \left(\frac{a+2}{a+1}\right)}{4 a}-\frac{\pi \log \left(-\frac{1}{a+1}\right) \log \left(-\frac{a}{a+1}\right)}{4 a}-\frac{\pi \log (a-1) \log (a+2)}{4 a}-\frac{\pi \log (a+1) \log (a+2)}{4 a} & (1\le a\le3), \\ -\frac{\pi \text{Li}_2\left(\frac{a^2}{a^2-1}\right)}{4 a}-\frac{\pi \log \left(-\frac{a}{a+1}\right) \log \left(\frac{1}{1-a^2}\right)}{4 a}+\frac{\pi \text{Li}_2(-2)}{4 a}+\frac{\pi \text{Li}_2(2)}{4 a}+\frac{\pi \text{Li}_2\left(-\frac{1}{2}\right)}{2 a}+\frac{\pi \text{Li}_2\left(\frac{1}{1-a}\right)}{4 a}+\frac{\pi \text{Li}_2\left(\frac{1}{a-1}\right)}{4 a}+\frac{\pi \text{Li}_2\left(\frac{a}{a-1}\right)}{4 a}+\frac{\pi \text{Li}_2\left(-\frac{1}{a-1}\right)}{4 a}+\frac{\pi \text{Li}_2\left(\frac{1}{a+1}\right)}{2 a}+\frac{\pi \text{Li}_2\left(\frac{a}{a+1}\right)}{2 a}-\frac{\pi \text{Li}_2\left(-\frac{a}{2}\right)}{2 a}-\frac{\pi \text{Li}_2(1-a)}{4 a}-\frac{\pi \text{Li}_2\left(\frac{a+2}{2 (1-a)}\right)}{4 a}-\frac{\pi \text{Li}_2\left(\frac{a-2}{a-1}\right)}{4 a}-\frac{\pi \text{Li}_2\left(-\frac{3}{a-1}\right)}{4 a}-\frac{\pi \text{Li}_2(a-1)}{4 a}-\frac{\pi \text{Li}_2\left(\frac{a+2}{2 (a+1)}\right)}{4 a}-\frac{\pi 
\text{Li}_2\left(\frac{a (a+2)}{(a+1)^2}\right)}{4 a}+\frac{\pi ^3}{3 a}-\frac{\pi \log ^2(2)}{4 a}+\frac{\pi \log ^2(a+1)}{2 a}+\frac{i \pi ^2 \log (2)}{4 a}+\frac{\pi \log (2) \log (a)}{2 a}+\frac{\pi \log (2) \log (a+2)}{2 a}-\frac{\pi \log (2) \log (a-1)}{4 a}-\frac{\pi \log (2) \log (a+1)}{4 a}+\frac{i \pi ^2 \log \left(\frac{1}{1-a}\right)}{4 a}+\frac{\pi \log (3) \log (a-2)}{4 a}+\frac{\pi \log (a-2) \log (a-1)}{4 a}+\frac{\pi \log \left(\frac{a+2}{a+1}\right) \log \left(-\frac{1}{a+1}\right)}{4 a}+\frac{\pi \log \left(-\frac{1}{a+1}\right) \log \left(-\frac{a}{a+1}\right)}{4 a}+\frac{\pi \log (a-1) \log (a+2)}{4 a}+\frac{\pi \log (a+1) \log (a+2)}{4 a}-\frac{\pi \log (a) \log (a+2)}{2 a}-\frac{\pi \log (a-2) \log \left(\frac{1}{a-1}\right)}{4 a}-\frac{\pi \log (3) \log \left(\frac{a+2}{a-1}\right)}{4 a}-\frac{\pi \log (2-a) \log (a-1)}{4 a}-\frac{\pi \log (a) \log \left(\frac{1}{a+1}\right)}{4 a}-\frac{\pi \log (3) \log \left(\frac{a-2}{a+1}\right)}{4 a}-\frac{\pi \log \left(\frac{1}{(a+1)^2}\right) \log \left(\frac{a+2}{a+1}\right)}{4 a}-\frac{\pi \log (3) \log (a+1)}{4 a}-\frac{\pi \log \left(-\frac{1}{a+1}\right) \log \left(\frac{a (a+2)}{(a+1)^2}\right)}{4 a}& (a\ge3). \end{cases}$$


Update 3: Common fourier transforms $$\begin{aligned} &1.\int_{0}^{\infty}\operatorname{Ci}(x)\cos(\omega x)\text{d}x= \begin{cases} 0 &(0\le\omega<1), \\ \displaystyle{ -\frac{\pi}{4} }&(\omega=1), \\ \displaystyle{ -\frac{\pi}{2\omega} }&(\omega>1). \end{cases}\\ &2.\int_{0}^{\infty}\operatorname{Ci}(x)\sin(\omega x)\text{d}x= \begin{cases} \displaystyle{-\frac{\ln(1-\omega^2)}{2\omega}} &(0\le\omega<1), \\ \displaystyle{ +\infty }&(\omega=1), \\ \displaystyle{-\frac{\ln(\omega^2-1)}{2\omega} }&(\omega>1). \end{cases}\\ &3.\int_{0}^{\infty}\operatorname{Ci}(x)^2\cos(\omega x)\text{d}x= \begin{cases} \displaystyle{ \frac{\pi\ln(1+\omega)}{2\omega} }&(0\le\omega\le2), \\ \displaystyle{ \frac{\pi\ln(\omega^2-1)}{2\omega} }&(\omega\ge2). \end{cases}\\ &4.\int_{0}^{\infty}\operatorname{si}(x)\sin(\omega x)\text{d}x= \begin{cases} 0 &(0\le\omega<1), \\ \displaystyle{ \frac{\pi}{4} }&(\omega=1), \\ \displaystyle{ \frac{\pi}{2\omega} }&(\omega>1). \end{cases}\\ &5.\int_{0}^{\infty}\operatorname{si}(x)\cos(\omega x)\text{d}x= \begin{cases} \displaystyle{\frac{1}{2\omega}\ln\left ( \frac{1+\omega}{1-\omega} \right ) } &(0\le\omega<1), \\ \displaystyle{ +\infty }&(\omega=1), \\ \displaystyle{\frac{1}{2\omega}\ln\left ( \frac{\omega+1}{\omega-1} \right ) }&(\omega>1). \end{cases}\\ &6.\int_{0}^{\infty}\operatorname{si}(x)^2\cos(\omega x)\text{d}x= \begin{cases} \displaystyle{ \frac{\pi\ln(1+\omega)}{2\omega} }&(0\le\omega\le2), \\ \displaystyle{ \frac{\pi}{2\omega}\ln\left ( \frac{\omega+1}{\omega-1} \right ) }&(\omega\ge2). \end{cases}\\ &7.\int_{0}^{\infty}\frac{\operatorname{Si}(x)}{x}\cos(\omega x)\text{d}x= \begin{cases} \displaystyle{-\frac{\pi}{2}\ln(\omega)} &(0<\omega\le1), \\ \displaystyle{0 }&(\omega\ge1). \end{cases}\\ \end{aligned}$$
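
For anyone who wants to test candidate closed forms numerically: here is a rough mpmath sanity check of $\mathcal{L}(2)$ and $\mathcal{L}(3)$ over growing finite windows (the integrands decay like powers of $\sin x / x$, so the truncation error shrinks as the window grows, but the oscillatory tail means this is only a rough check).

```python
from mpmath import mp, quad, ci, log, pi

mp.dps = 15
for upper in (100, 400):
    # split the range into short pieces so the oscillations are integrated accurately
    pts = list(range(0, upper + 1, 20))
    L2 = quad(lambda x: ci(x)**2, pts)
    L3 = quad(lambda x: ci(x)**3, pts)
    print(upper, L2 - pi / 2, L3 + 3 * pi / 2 * log(2))   # both differences shrink toward 0
```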

What classes of graphs result from $\overline{T}$?

Posted: 04 Feb 2022 07:58 AM PST

I need help in characterizing the class of graphs that results from taking the complement of a tree, i.e., the graph that results from removing the edges of a tree from a complete graph. More formally, let $T=(V,E)$ be an $n$-vertex tree with vertex set $V$ and edge set $E$. Are there known results on the class of graphs defined by $\overline{T}$?

There are two trivial cases. If $T=S_n$, i.e., a star tree (a single vertex has degree $n-1$ and the other vertices have degree $1$), we have that $\overline{S_n}=\{v\}\cup K_{n-1}$, i.e., $\overline{S_n}$ is a graph with an isolated vertex (degree $0$) and an $(n-1)$-vertex clique. I've got the feeling that $\overline{P_n}$ (where $P_n$ is an $n$-vertex path graph) has a precise characterization, but I can't put a name to it.
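
To experiment with small cases (e.g. to see whether $\overline{P_n}$ matches some named class), a short networkx sketch is handy; it just prints a few properties of the complements of small paths and stars.

```python
import networkx as nx

for n in range(4, 9):
    P_bar = nx.complement(nx.path_graph(n))        # complement of the n-vertex path
    S_bar = nx.complement(nx.star_graph(n - 1))    # complement of the n-vertex star
    print(n,
          'complement of P_n connected:', nx.is_connected(P_bar),
          'chordal:', nx.is_chordal(P_bar),
          'complement of S_n component sizes:', sorted(len(c) for c in nx.connected_components(S_bar)))
```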

If there are no (or few) results about general $T$, can we say something about $\overline{T}$ if $T$ belongs to a class of trees, for example caterpillar trees (trees in which the removal of all leaves produces a path graph), lobster trees (trees in which the removal of all leaves produces a caterpillar tree), ...?

Any help will be appreciated. Thank you.

Note: I also posted this on mathoverflow.

Prove the existence of such an immersion

Posted: 04 Feb 2022 07:58 AM PST

This question was asked in my quiz on smooth manifolds and I was unable to solve it. I tried it again at home and am still not able to solve it. The question is:

Let $\sigma$ be an integral curve for a vector field X on a manifold M, with $\sigma(0)=p$. If $X_p\neq 0$, show that there exists $\epsilon >0$ such that $\sigma : (-\epsilon , \epsilon) \to M $ is an immersion.

What I could prove is: if $p\in M$, then there exist $\epsilon >0$ and an integral curve $\sigma : (-\epsilon ,\epsilon)\to M$ of $X$ with $\sigma(0)=p$.

But how can I prove that such a $\sigma$ exists which is also an immersion?

Can you please help?

If all diagonals from a vertex of a regular polygon are drawn, the angles formed at that vertex are equal

Posted: 04 Feb 2022 07:41 AM PST

How can I prove this statement? I tried with a pentagon, but I have not gotten anywhere.

Restriction: I cannot use the circumscribed circle to prove it; by this I mean I am not allowed to inscribe the polygon in a circle. The idea is to prove it with properties of congruence of triangles, or properties of parallelograms, quadrilaterals, trapeziums or trapezoids.

If all the possible diagonals from a vertex of a regular polygon are drawn, the angles formed at that vertex are all equal.

Mutual Information Always Non-negative

Posted: 04 Feb 2022 07:38 AM PST

What is the simplest proof that mutual information is always non-negative, i.e., that $I(X;Y)\ge0$?

Linear Algebra Journal

Posted: 04 Feb 2022 08:01 AM PST

Is there any journal which has significant material on the teaching of linear algebra? I am investigating the most effective way to teach a course on linear algebra. What are the most important things students should learn in a first course? What are the main questions in this field?

How to cite preprints from arXiv?

Posted: 04 Feb 2022 08:00 AM PST

Obviously, when writing a math research paper it is good to cite one's references. However, with the advent of arXiv, oftentimes a paper is only available on arXiv while it awaits the long process of peer review. But here a problem arises: how does one cite an arXiv preprint?

Note: it would be nice if a BibTeX template were included.
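
Regarding the note: one common convention (a sketch only; the key, identifier and subject class below are placeholders, and journals differ in the exact fields they want) is an `@misc` entry carrying the arXiv identifier in the `eprint` field:

```bibtex
@misc{author2022example,
  author        = {Author, A. N.},
  title         = {An example title},
  year          = {2022},
  eprint        = {2202.01234},
  archivePrefix = {arXiv},
  primaryClass  = {math.CO},
  note          = {Preprint},
}
```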
