Sunday, June 26, 2022

Recent Questions - Mathematics Stack Exchange



Equivalent of Post-completeness for modal logic S5; any stronger system leads to modal collapse.

Posted: 26 Jun 2022 10:42 AM PDT

I've heard before that classical propositional logic is Post-complete, for example, see this answer.

This means that given a set of axioms and inference rules $(A, I)$ for classical propositional logic, for any well-formed formula $\alpha$ such that $A \not\vdash_I \alpha$, it holds for all propositions $\varphi$ that $A, \alpha \vdash_I \varphi$. Let's call the latter system one-valued logic.

The modal logic $\mathsf{S5}$ seems to have a similar property to classical logic here. Asserting any additional well-formed formula $\alpha$ as an axiom gives us back $\mathsf{S5}$, one-valued logic, or classical logic with the modal operators interpreted as identity functions.

My question is twofold:

  1. What is the name for this maximality property that $\mathsf{S5}$ has?
  2. How do we prove that it has it (if indeed it does have property (1))?

If the ratio of the principal to the monthly simple interest is 192:1, what is the yearly rate of simple interest?

Posted: 26 Jun 2022 10:39 AM PDT

Please help me: if the ratio of the principal to the monthly simple interest is 192:1, how can I find the yearly rate of simple interest?
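A quick numeric sketch, on the assumption that "192:1" means principal : one month's interest:

```python
# A quick check, assuming "192:1" means principal : one month's interest.
P = 192.0                      # any principal works; the ratio fixes the rate
monthly_interest = P / 192     # principal : monthly interest = 192 : 1
monthly_rate = monthly_interest / P
yearly_rate = 12 * monthly_rate
print(yearly_rate)             # 0.0625, i.e. 6.25% per year
```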

$V \in \mathbb{C}^{n \times n}$. Does $\lVert I_n - V \rVert_\infty \leq \epsilon$ imply lower bounds on the smallest eigenvalue of $V$?

Posted: 26 Jun 2022 10:36 AM PDT

Let $V \in \mathbb{C}^{n \times n}$ and let $I_n$ be the $n \times n$ identity matrix. Suppose $\lVert I_n - V \rVert_\infty \leq \epsilon$, where the norm is the operator norm.

I would like to get a tight lower bound on $\lambda_{\mathrm{min}}$, the smallest eigenvalue of $V$. Intuitively, it seems the smallest eigenvalue should be at least $1 - \epsilon$, but I'm not sure how one would show that. Any ideas/suggestions?
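The intuition can at least be stress-tested numerically. A sketch (my assumption: $\lVert\cdot\rVert_\infty$ is the induced $\infty$-norm, i.e. maximum absolute row sum): every eigenvalue of $V = I_n - (I_n - V)$ equals $1-\mu$ for an eigenvalue $\mu$ of $I_n - V$, and $|\mu| \le \rho(I_n - V) \le \lVert I_n - V\rVert_\infty \le \epsilon$, which would give $|\lambda_{\mathrm{min}}| \ge 1-\epsilon$:

```python
import numpy as np

# Random stress test of the claim (assuming the infinity operator norm,
# i.e. maximum absolute row sum).  Eigenvalues of V = I - (I - V) are
# 1 - mu with |mu| <= rho(I - V) <= ||I - V||, so |lambda| >= 1 - eps.
rng = np.random.default_rng(0)
n, eps = 8, 0.3
for _ in range(100):
    E = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
    E *= eps / np.linalg.norm(E, np.inf)    # scale so ||I - V||_inf = eps
    V = np.eye(n) - E
    assert np.abs(np.linalg.eigvals(V)).min() >= 1 - eps - 1e-12
```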

I am stuck on a problem and need help finding the monotonicity of a sequence

Posted: 26 Jun 2022 10:31 AM PDT

I have done a few steps while finding the monotonicity of the sequence. My work is in the attachment; the image contains the question and the steps I applied. Kindly help.

Why is the radius of convergence used to ignore Big-O when observing partial fractions?

Posted: 26 Jun 2022 10:30 AM PDT

My knowledge of the concept of Big-O is very limited, so I apologize. Is this because the radius of convergence gives us all the information that the Big-O term represents, while also providing accurate approximations of the series being observed? In my case, the Maclaurin series of $\arcsin(x)$?

Eigenvalues and Eigenvectors of a Symmetric Tridiagonal Matrix of dimension N that is not Toeplitz

Posted: 26 Jun 2022 10:30 AM PDT

From a physics paper, I have the following $N \times N$ tridiagonal matrix $K$, where, for $1 \leq \alpha, \beta \leq N$ $$K_{\alpha \beta}= \frac{(\alpha+\frac{1}{2})^{2} + l(l+1) + (\alpha-\frac{1}{2})^{2}}{\alpha^{2}}\delta_{\alpha \beta} -\frac{1}{4}\delta_{\alpha 1}\delta_{\beta 1} - \frac{1}{\alpha(\alpha+1)}\delta_{\alpha, \beta+1} - \frac{1}{\beta(\beta+1)}\delta_{\alpha+1, \beta}. $$

We can take $l$ to be a constant. I know that $K$ is tridiagonal and symmetric. Given these facts, is it possible to analytically calculate the eigenvalues and eigenvectors of $K$ as a function of $N$?

If a closed-form algebraic expression is not possible, can we find series approximations for the eigenvalues and eigenvectors, as functions of $N$?
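No closed form is offered here, but for experiments one can build $K$ directly from the formula and diagonalize it numerically; a sketch (the index $\alpha$ runs from $1$ to $N$, and $l = 1$ is just an example value):

```python
import numpy as np

# Sketch: build K from the formula (alpha, beta = 1..N) and diagonalize
# numerically.  l is a free constant; l = 1 is just an example value.
def build_K(N, l):
    K = np.zeros((N, N))
    for a in range(1, N + 1):
        K[a-1, a-1] = ((a + 0.5)**2 + l*(l + 1) + (a - 0.5)**2) / a**2
        if a < N:  # symmetric off-diagonals: -1/((alpha+1)(alpha+2))
            K[a-1, a] = K[a, a-1] = -1.0 / ((a + 1) * (a + 2))
    K[0, 0] -= 0.25          # the -(1/4) delta_{alpha 1} delta_{beta 1} term
    return K

K = build_K(20, l=1)
assert np.allclose(K, K.T)           # symmetric, so the spectrum is real
evals, evecs = np.linalg.eigh(K)     # ascending eigenvalues, orthonormal eigenvectors
```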

Confusion: Cannot understand the statement that $(0,1)^2$ cannot be represented as a union of countably many open balls.

Posted: 26 Jun 2022 10:24 AM PDT

In sharp contrast to the claim in Union of a countable collection of open balls, we have the following assertion in Christopher Heil's book Introduction to Real Analysis

[image: the assertion from Heil's book]

Is this an error on the part of the author? Please help; I am confused, since both of these statements (including the one in the link on Math.SE) do not seem to be true at the same time.

Linear operator to zero power

Posted: 26 Jun 2022 10:24 AM PDT

When T is any linear operator acting on a vector space V, and n is a natural number, $T^n$ means T applied n times (composition) and is also a linear operator.

When T is a nonzero linear operator acting on a vector space V, $T^0$ is the identity operator. But I think that should also be true when T is the zero operator i.e. the operator which sends all vectors to the zero vector.

Why? Because $T^0 $ means that we are not applying any operator. So it makes sense to say all vectors stay unchanged by $T^0 $ even when T is the zero operator.

Is that indeed so?

I am asking because this doesn't quite agree with what we have for real numbers, where $0^0$ is usually left undefined.

Principal : monthly simple interest = 192 : 1. What is the yearly rate of simple interest?

Posted: 26 Jun 2022 10:12 AM PDT

If the ratio of the principal to the monthly simple interest is 192:1, what is the yearly rate of simple interest?

Conditional Expectation of multivariable function

Posted: 26 Jun 2022 10:30 AM PDT

Suppose I have two random variables $X$ and $Y$ which are not independent. Also, I have a function $g:\mathbb{R}^2\rightarrow \mathbb{R}$, and I know that for any fixed $y\in \mathbb{R}$ we have $$\mathbb{E}[g(X,y)]=h(y),$$ where $h$ is known. Is it right to say

$$\mathbb{E}[g(X,Y)|Y=y]=h(y)?$$ This sounds correct to me but I don't know how to justify it theoretically. Thank you very much in advance.
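A word of caution that can be checked by simulation: without independence the proposed identity can fail. A hypothetical counterexample takes $Y = X \sim N(0,1)$ and $g(x,y) = x$, so that $h(y) = \mathbb{E}[X] = 0$ for every fixed $y$, while $\mathbb{E}[g(X,Y)\mid Y=y] = y$:

```python
import numpy as np

# Counterexample sketch (hypothetical setup): Y = X ~ N(0,1), g(x, y) = x.
# For every fixed y, E[g(X, y)] = E[X] = 0, so h(y) = 0 identically.
# Yet E[g(X, Y) | Y = y] = E[X | X = y] = y, which is not h(y).
rng = np.random.default_rng(1)
x = rng.standard_normal(1_000_000)
y = x                                   # fully dependent: Y = X
near_one = np.abs(y - 1.0) < 0.05       # crude conditioning on Y ~ 1
cond_mean = x[near_one].mean()          # estimates E[g(X, Y) | Y = 1]
print(cond_mean)                        # close to 1, while h(1) = 0
```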

Calculating time from acceleration and distance travelled

Posted: 26 Jun 2022 10:23 AM PDT

I have an interesting question that has me stumped. If an object is free-falling under gravity (so accelerating at $9.8\ \text{m/s}^2$) and it traveled a distance of 26 cm, how long did that take? The initial velocity is 0 m/s and I don't know the final velocity. I'm doing a reaction-time test but can't work out the actual time. Thanks in advance to all who participate. MathsCuriosity
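For constant acceleration from rest, $d = \tfrac12 g t^2$, so $t = \sqrt{2d/g}$. A quick check with the numbers given:

```python
from math import sqrt

# From rest under constant acceleration: d = (1/2) g t^2  =>  t = sqrt(2d/g).
g = 9.8           # m/s^2
d = 0.26          # 26 cm in metres
t = sqrt(2 * d / g)
print(t)          # ~0.2304 s, a plausible human reaction time
```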

Primes of the form $2n+1$

Posted: 26 Jun 2022 10:39 AM PDT

Let $\sigma(n)$ denote the divisor function, which sums the divisors of an integer $n \geq 1$.
We introduce the function $f(n)$ such that:

$$f(n) = 1+(n!)^2-\sigma(n!)(n!)^2+2\sum_{k=1}^{-1+\sigma(n!)}\left \lfloor \frac{k(1+(n!)^2)}{\sigma(n!)} \right\rfloor$$

When $f(n)=2n+1$, is $2n+1$ always prime?

I was unable to compute larger cases because of the factorials.

The first examples are:

$ 1, 1, 1, 1, 1, 13, 1, 17, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 61, 1, 1, 1, 1, 1, 1, 1, 1, 61, 1, 1, 1, 193, 1, 1, 1, 757, 61, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 109, 1, 1, 1, 181, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 113...$

And, for example, for $n=6$ we have the prime number $13$, which is of the form $2(6)+1$.

Thanks.
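The formula can at least be tested directly for small $n$; a brute-force sketch (it becomes slow quickly, since the inner sum has $\sigma(n!)-1$ terms):

```python
from math import factorial

# Brute-force check of f(n) exactly as written; only small n are feasible,
# since the sum has sigma(n!) - 1 terms.
def sigma(m):
    """Sum of the divisors of m, by trial division."""
    s, d = 0, 1
    while d * d <= m:
        if m % d == 0:
            s += d + (m // d if d * d != m else 0)
        d += 1
    return s

def f(n):
    sq = factorial(n) ** 2
    m, q = 1 + sq, sigma(factorial(n))
    return m - q * sq + 2 * sum(k * m // q for k in range(1, q))

print([f(n) for n in range(1, 9)])   # [1, 1, 1, 1, 1, 13, 1, 17]
```

This reproduces the first terms of the sequence quoted above.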

Locally isomorphic singularities = Locally isomorphic minimal blow-ups?

Posted: 26 Jun 2022 10:08 AM PDT

Let's say that one has two varieties $V_1, V_2$, with singularities at points $x_1$ and $x_2$ respectively, and that there exist open neighbourhoods $U_1$ and $U_2$ of these singularities such that $U_1$ and $U_2$ are isomorphic, i.e. such that $V_1$ and $V_2$ are locally isomorphic around their singularities.

Is it then always a given that if we have minimal resolutions $W_1$ and $W_2$ of $V_1$ and $V_2$ at these singularities, then around the exceptional divisors $D_1$ and $D_2$, there exists open neighbourhoods $U'_1$ and $U'_2$ such that $U'_1$ and $U'_2$ are isomorphic?

Further, if this is the case, is it then a given that $\mathcal{O}_{W_1} (U'_1) \cong \mathcal{O}_{W_2} (U'_2)$?

Prove or disprove that the statement P implies Q (and conversely) for T: $\mathbb R^2 \to\mathbb R^2$ be a linear transformation.

Posted: 26 Jun 2022 10:09 AM PDT

Question: Let $T:\mathbb R^2\to \mathbb R^2$ be a linear transformation and let A be the matrix representation of T with respect to the standard basis of $\mathbb R^2$.Consider two statements:

(P): There are exactly two distinct lines $L_1, L_2 \subset \mathbb R^2$ passing through the origin that are mapped onto themselves: $T(L_1)=L_1$ and $T(L_2)=L_2$.

(Q): The matrix $A$ has two distinct nonzero real eigenvalues.

My Attempt:

Given that $T:\mathbb R^2 \to \mathbb R^2$ is a linear transformation and $A$ is its matrix representation, the lines passing through the origin look like $y=mx$. When it comes to "mapping onto itself", I thought about $T(x)=x$ and $T(-x)=-x$.

Clearly we have $T^n(x)=x$ and $T^n(-x)=-x$ for all $n$. Thus I see only these two distinct lines mapping onto "themselves", and they have eigenvalues $1$ and $-1$ respectively (so (P) implies (Q)). Can we find a counterexample for (Q) implies (P)? I couldn't come up with one. Any hints for constructing the proofs? Thanks.

Feynman's trick to solve $\int_0^\infty \frac{\arctan(x)}{\sqrt{x}(1+x^2)}\,dx$

Posted: 26 Jun 2022 10:14 AM PDT

I wanted to evaluate the integral

\begin{align*} \int_0^\infty \frac{\arctan(x)}{\sqrt{x}(1+x^2)}\,dx=\frac{\pi^2}{4\sqrt{2}}-\frac{\pi \ln(2)}{2\sqrt{2}} \tag{1} \end{align*}

I thought of using Feynman's trick by considering the integral

$$ \begin{align*} I(a)&=\int_0^\infty \frac{\arctan(ax)}{\sqrt{x}(1+x^2)}\,dx \tag{2} \end{align*} $$

Differentiating $(2)$ with respect to $a$ we obtain:

\begin{align*} I^\prime(a)&=\int_0^\infty \frac{x}{\sqrt{x}(1+x^2)(1+a^2x^2)}\,dx\\ &=\int_0^\infty \frac{x^{1/2}}{(1+x^2)(1+a^2x^2)}\,dx\\ &=\frac{1}{a^2-1}\left(a^2 \int_0^\infty \frac{x^{1/2}}{1+a^2x^2}\,dx-\int_0^\infty \frac{x^{1/2}}{1+x^2}\,dx\right)\\ &=\frac{1}{a^2-1}\left(\sqrt{a} \int_0^\infty \frac{x^{1/2}}{1+x^2}\,dx-\int_0^\infty \frac{x^{1/2}}{1+x^2}\,dx\right)\\ &=\frac{1}{a^2-1}\left(\frac{\sqrt{a}}{2} \int_0^\infty \frac{x^{-1/4}}{1+x}\,dx-\frac12\int_0^\infty \frac{x^{-1/4}}{1+x}\,dx\right)\\ &=\frac{\pi}{\sqrt{2}}\left( \frac{\sqrt{a}}{a^2-1}-\frac{1}{a^2-1}\right)\\ \end{align*}

Integrating back

\begin{align*} I(a)&=\frac{\pi}{\sqrt{2}}\left( \int\frac{\sqrt{a}}{a^2-1}\,da-\int\frac{1}{a^2-1}\,da\right)\\ &=\frac{\pi}{\sqrt{2}}\left( \int\frac{1}{(1-a)(1+a)}\,da-\int\frac{\sqrt{a}}{(1-a)(1+a)}\,da\right)\\ &=\frac{\pi}{\sqrt{2}}\left(\frac12 \int\frac{da}{1-a}+\frac12 \int\frac{da}{1+a}-\frac12 \int\frac{\sqrt{a}}{1-a}\,da-\frac12 \int\frac{\sqrt{a}}{1+a}\,da\right)\\ &=\frac{\pi}{\sqrt{2}}\left(\frac12\ln\left(\frac{1+a}{1-a} \right)- \int\frac{u^2}{1-u^2}\,du- \int\frac{u^2}{1+u^2}\,du\right)\\ &=\frac{\pi}{\sqrt{2}}\left(\frac12\ln\left(\frac{1+a}{1-a} \right)+ \int\frac{u^2-1+1}{u^2-1}\,du- \int\frac{u^2+1-1}{u^2+1}\,du\right)\\ &=\frac{\pi}{\sqrt{2}}\left(\frac12\ln\left(\frac{1+a}{1-a} \right)- \int\frac{du}{1-u^2}+ \int\frac{du}{u^2+1}\right)\\ &=\frac{\pi}{\sqrt{2}}\left(\frac12\ln\left(\frac{1+a}{1-a} \right)- \frac12\ln\left(\frac{1+u}{1-u} \right)+ \arctan(u)+C\right)\\ &=\frac{\pi}{\sqrt{2}}\left(\frac12\ln\left(\frac{1+a}{1-a} \right)- \frac12\ln\left(\frac{1+\sqrt{a}}{1-\sqrt{a}} \right)+ \arctan\left(\sqrt{a}\right)+C\right)\\ &=\frac{\pi}{\sqrt{2}}\left(\operatorname{arctanh}(a)- \operatorname{arctanh}\left(\sqrt{a} \right)+ \arctan\left(\sqrt{a}\right)+C\right)\\ \end{align*}

Now, if we set $a=0$ in $(2)$ we find that $C=0$. Therefore

\begin{align*} \int_0^\infty \frac{\arctan(ax)}{\sqrt{x}(1+x^2)}\,dx=\frac{\pi}{\sqrt{2}}\left(\operatorname{arctanh}(a)- \operatorname{arctanh}\left(\sqrt{a} \right)+ \arctan\left(\sqrt{a}\right)\right) \tag{3} \end{align*}

Supposedly, we should now let $a \to 1$ in $(3)$ to find $(1)$, but $\lim_{a \to 1} \operatorname{arctanh}(a) = \infty$. Is there a way to fix this problem so that Feynman's trick is still applicable?

To add background to the question: I was originally trying to solve the integral $\int_0^{\pi/2}\frac{x}{\sqrt{\tan(x)}}\,dx$. By an obvious change of variable we find that $\int_0^{\pi/2}\frac{x}{\sqrt{\tan(x)}}\,dx=\int_0^\infty \frac{\arctan(x)}{\sqrt{x}(1+x^2)}\,dx$. Now observe that

\begin{align*} \int_0^{\pi/2}\frac{x}{\sqrt{\tan(x)}}\,dx&=\frac{\pi}{2}\int_0^{\pi/2}\sqrt{\tan(x)}\,dx-\int_0^{\pi/2}x\sqrt{\tan(x)}\,dx & \left(x \to \frac{\pi}{2}-x \right)\\ &=\frac{\pi}{2}\frac{\pi}{\sqrt{2}}-\int_0^{\pi/2}x\sqrt{\tan(x)}\,dx & ( \text{by beta function})\\ &=\frac{\pi^2}{2\sqrt{2}}-\int_0^\infty \frac{\sqrt{x}\arctan(x)}{1+x^2}\,dx\\ &=\frac{\pi^2}{2\sqrt{2}}-\left(\frac{\pi}{2\sqrt{2}}\ln(2)+\frac{{\pi}^2}{4\sqrt{2}}\right) \end{align*}

The last line follows from this post.

Determine P(A|(A U B))

Posted: 26 Jun 2022 10:29 AM PDT

Given, $P(A) = 0.2, P(B) = 0.5, P(A' \cup B') = 0.9$, find $P(A|A\cup B)$.

What I did:

$$P(A|(A \cup B)) = \frac{P(A \cap (A \cup B))}{P(A \cup B)}$$

Now I don't understand how to proceed with this.
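Two observations unblock the computation: $A \subseteq A\cup B$, so $P(A\cap(A\cup B)) = P(A)$; and by De Morgan, $P(A'\cup B') = P((A\cap B)')$, which gives $P(A\cap B)$. A numeric sketch:

```python
# Numeric sketch.  Key facts:
#   A is a subset of A u B, so P(A n (A u B)) = P(A);
#   De Morgan: P(A' u B') = P((A n B)'), hence P(A n B) = 1 - 0.9 = 0.1.
P_A, P_B = 0.2, 0.5
P_AnB = 1 - 0.9                     # from P(A' u B') = 0.9
P_AuB = P_A + P_B - P_AnB           # inclusion-exclusion: 0.6
answer = P_A / P_AuB
print(answer)                       # ~ 1/3
```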

Show that $\int |g_n - g|\to 0$ if and only if $\int |g_n| \to \int |g|$

Posted: 26 Jun 2022 10:31 AM PDT

Let $(g_n), g$ be (Lebesgue) integrable functions (i.e. $\int |g_n|, \int |g| < \infty$). Suppose $g_n\to g$ almost everywhere. Show that $\int |g_n - g|\to 0$ if and only if $\int |g_n| \to \int |g|$.

Suppose $D$ is the (common) domain of the functions. For the "only if" direction, suppose $\int |g_n - g|\to 0$. Then $\left|\int |g_n| - \int |g|\right| \leq \int ||g_n|-|g|| \leq \int |g_n-g|\to 0.$ The other direction seems trickier. I tried using Egorov's theorem, but that only works if $m(D) < \infty$. And even if $m(D) < \infty$, we just get that for all $\epsilon > 0$ there exists $F\subseteq D$ with $m(D\backslash F) < \epsilon$ and $g_n\to g$ uniformly on $F$. The dominated convergence theorem doesn't seem to apply here either: $\int |h|\leq \int |g|$ does not necessarily imply $|h|\leq |g|$ almost everywhere, so I can't seem to find a function $h\in L^1$ with $|g_n|\leq h$ almost everywhere for each $n$.

Can you solve this rigid and crazy integral?

Posted: 26 Jun 2022 10:32 AM PDT

Find the value of the following integral $$\int\frac{4x\ln(x)}{3x^2+2x+1}dx$$ I have no idea how to solve this integral. I tried all the usual methods (parts, substitution and many more) but I got stuck every time at the beginning. Please help me to solve this. By the way, calculations involving complex numbers, polylogarithms and special integrals are allowed.

Any help will be greatly appreciated.

EDIT: There are no bounds on the integral. EDIT: I'm asking for the antiderivative.

Find the power spectral density $S(\omega)$ of a random process whose estimated covariance sequence is $r(\tau)=2\delta[\tau]+\delta[\tau-1]+\delta[\tau+1]$

Posted: 26 Jun 2022 10:30 AM PDT

You have estimated the covariance sequence of a random process as $$r(\tau)=2\delta[\tau]+\delta[\tau-1]+\delta[\tau+1]$$ Compute the process's power spectral density (PSD) $S(\omega)$.

Hints:

  • $S(\omega)=\sum_{l=-\infty}^\infty r(l)e^{-j\omega l}$
  • $e^{jx}=\cos(x)+j\sin(x)$

I really have no idea how to calculate it.
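Following the first hint, only $l \in \{-1, 0, 1\}$ contribute to the sum, so $S(\omega) = 2 + e^{-j\omega} + e^{j\omega} = 2 + 2\cos(\omega)$. A quick numerical cross-check:

```python
import numpy as np

# Only l in {-1, 0, 1} contribute, so S(w) = 2 + e^{jw} + e^{-jw} = 2 + 2cos(w),
# which is real and nonnegative, as a PSD must be.  Numerical cross-check:
w = np.linspace(-np.pi, np.pi, 1001)
r = {0: 2.0, 1: 1.0, -1: 1.0}                 # r(0)=2, r(+-1)=1, else 0
S = sum(rl * np.exp(-1j * w * l) for l, rl in r.items())
assert np.allclose(S.imag, 0)
assert np.allclose(S.real, 2 + 2 * np.cos(w))
```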

Does anyone know a function that is like sin or cos but with a pointy tip?

Posted: 26 Jun 2022 10:14 AM PDT

Basically, as the title says: does anyone know a function that is like sin or cos but with a pointy tip?
[image of the function I want to achieve]

Edit: There seems to be a misunderstanding, since I can't draw it well in the image. I don't want it to be striped; I want it to be a connected line like sin and cos, the only difference being that the tip is pointy.
[better image of the function]
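One candidate (a guess at what "pointy tip" means here) is the triangle wave, which has the closed form $\frac{2}{\pi}\arcsin(\sin x)$: continuous and $2\pi$-periodic like $\sin$, but with sharp corners at the peaks:

```python
import numpy as np

# Candidate "pointy sine": the triangle wave (2/pi) * arcsin(sin(x)).
# Continuous and 2*pi-periodic like sin, but the peaks are sharp corners.
def tri(x):
    return (2 / np.pi) * np.arcsin(np.sin(x))

x = np.array([0.0, np.pi / 2, np.pi, 3 * np.pi / 2])
print(tri(x))    # approximately [0, 1, 0, -1]: same nodes and peaks as sin
```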

Why does $\int_{-x}^x f^\alpha\leq f(-x)+f(x)$ imply $f=0$?

Posted: 26 Jun 2022 10:34 AM PDT

Let $0\leq f\in C(\mathbb{R})$, and for some $\alpha>1$, we have \begin{equation*}\begin{aligned} \int_{-x}^x f^\alpha\leq f(-x)+f(x), \forall\ x\in [0,+\infty). \end{aligned}\end{equation*} Prove that $f\equiv 0$.

Let $F(x)=\int_{-x}^x f^\alpha$, then $F'(x)=f^\alpha(x)+f^\alpha(-x) \leq [f(x)+f(-x)]^\alpha$. What to do next? Any ideas?

$\sin(25°)+\cos(115°)$? [closed]

Posted: 26 Jun 2022 10:19 AM PDT

What is the value of $\sin(25°)+\cos(115°)$?

Using $\cos(90°+\theta)=-\sin(\theta)$, we get, $$\sin(25°)+\cos(115°)=\sin(25°)-\sin(25°)=0$$

But when I searched the same on Google, it showed $-0.45816155531$ as result on their calculator.

Which result is correct? Also, why is the other one incorrect?
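Both results can be reproduced: in degrees the identity gives exactly $0$, while a value of $-0.45816\ldots$ is what one gets by reading $25$ and $115$ as radians (a common calculator default):

```python
from math import sin, cos, radians

# In degrees the identity gives exactly zero (up to floating-point noise):
deg = sin(radians(25)) + cos(radians(115))
print(deg)        # ~0

# Reading 25 and 115 as RADIANS instead explains the -0.45816... result:
rad = sin(25) + cos(115)
print(rad)        # ~ -0.458
```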

How to understand what property is necessary for the primitiveness of a group?

Posted: 26 Jun 2022 10:29 AM PDT

Under what conditions will the group $G=\langle a,b\rangle$, where $a=(i,j)$ and $b=(1,2,\ldots,n)$, be primitive?
I considered some properties, such as: $b \cdot b$ equals a shift of the bottom line in the two-line notation by $2$, and $b \cdot b \cdot b$ equals a shift of $1$. But it seems to me that sorting through all possible products of $a$ and $b$ is not the best option. Maybe I should use the proof of Jordan's theorem ($G \lt S(\Omega)$ and $G$ contains a transposition $(\alpha , \beta) \in G \Rightarrow G = S(\Omega)$)?

What is the distribution of the number of boys standing between the leftmost girl and the rightmost girl?

Posted: 26 Jun 2022 10:05 AM PDT

$10$ Boys and $10$ Girls get ordered in a line. How is $X$, the number of boys standing between the leftmost girl and the rightmost girl, distributed?

I tried thinking of selecting one place from the $20$ for the leftmost girl, and then selecting $k$ places from the remaining $19$ for the boys, or selecting one place from the remaining $19$ for the rightmost girl. I can't figure out how to solve this.

Any help is appreciated.
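While not a closed form, a quick Monte Carlo sketch gives the empirical distribution to check any candidate formula against:

```python
import random
from collections import Counter

# Monte Carlo sketch: estimate the distribution of X empirically.
random.seed(0)

def sample():
    line = list("B" * 10 + "G" * 10)
    random.shuffle(line)
    first = line.index("G")                       # leftmost girl
    last = len(line) - 1 - line[::-1].index("G")  # rightmost girl
    return line[first:last + 1].count("B")        # boys strictly between them

N = 100_000
counts = Counter(sample() for _ in range(N))
pmf = {x: c / N for x, c in sorted(counts.items())}
print(pmf)   # empirical P(X = x); the mean is near 10 * (9/11) = 8.18...
```

(The sanity check on the mean: a given boy lies between the extreme girls unless he is left of all $10$ girls or right of all of them, each with probability $1/11$, so $\mathbb{E}[X] = 10 \cdot 9/11$.)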

Equivalence between bi-invariant metrics on Lie groups and Symmetric spaces

Posted: 26 Jun 2022 10:23 AM PDT

Let $G$ be a simply connected Lie group with Lie algebra $\mathfrak{g}$ and $K$ a connected closed Lie subgroup of $G$ with Lie algebra $\mathfrak{s}$. Then $G/K$ is a homogeneous space. Equip $G$ with a left-invariant Riemannian metric $Q_G$ on $G$ and let $Q_{G/K}$ be the unique Riemannian metric on $G/K$ so that $\pi: G \to G/K$ is a Riemannian submersion (that is, so that $G/K$ is a Riemannian homogeneous space). Let $\mathfrak{h}$ be the orthogonal complement of $\mathfrak{s}$ with respect to $Q_G$, so that $\mathfrak{g} = \mathfrak{s} \oplus \mathfrak{h}$.

  1. If $Q_G$ happens to be bi-invariant (so that $Q_{G/K}$ is $G$-invariant), then is $G/K$ necessarily symmetric?
  2. Conversely, if $G/K$ happens to be symmetric, then is $Q_G$ necessarily bi-invariant?

1) I believe this can be proved by showing that Cartan's decomposition holds: $$[\mathfrak{h}, \mathfrak{h}] \subset \mathfrak{s}, \ [\mathfrak{h}, \mathfrak{s}] \subset \mathfrak{h}, \ [\mathfrak{s}, \mathfrak{s}] \subset \mathfrak{s}.$$ Observe that $[\mathfrak{s}, \mathfrak{s}] \subset \mathfrak{s}$ is trivially true since $K$ is a Lie subgroup of $G$. For the other two, I thought to argue as follows:

Let $\xi \in \mathfrak{h}$, $\eta, \sigma \in \mathfrak{s}$. Because the metric is bi-invariant (and hence $\text{Ad}$-invariant),

$$0 = \left< \xi, \sigma \right> = \left< \text{Ad}_{\text{Exp}(t \eta)} \xi, \text{Ad}_{\text{Exp}(t\eta)} \sigma\right>.$$ Taking the derivative at $t = 0$ yields $$0 = \left< [\eta, \xi], \sigma \right> + \left< \xi, [\eta, \sigma]\right> = \left< [\eta, \xi], \sigma \right>$$ since $[\eta, \sigma] \in \mathfrak{s}$. Hence $[\eta, \xi] \in \mathfrak{h}$, and so $[\mathfrak{h}, \mathfrak{s}] \subset \mathfrak{h}$. Repeating this argument with $\xi, \eta \in \mathfrak{h}, \sigma \in \mathfrak{s}$ similarly yields $[\mathfrak{h}, \mathfrak{h}] \subset \mathfrak{s}$. It can be seen that the map $L = \text{Id}_{\mathfrak{s}} - \text{Id}_{\mathfrak{h}}$ is an involutive automorphism of $\mathfrak{g}$ and since $G$ is simply connected, there exists an involutive automorphism $\sigma: G \to G$ with $K = G_{\sigma}$ (the fixed point set of $\sigma$). It follows that $G/K$ is symmetric.

2) I believe this is false. We can take $K = \{e\}$, from which we get $G/K \cong G$, and this post seems to suggest that there exist Lie groups which do not admit bi-invariant metrics but are nevertheless symmetric spaces. Though the answer avoids taking a firm stand one way or another, and just proposes a strategy that would yield a counterexample if one existed.

Points of an equilateral pentagon from centroid and side length?

Posted: 26 Jun 2022 10:06 AM PDT

I was wondering: is it possible to calculate the coordinates of every vertex of an equilateral pentagon using only its side length and its centroid?

The pentagon's centroid will not be fixed at the origin, and thus could be any $(x, y)$.

Please let me know if you can help!

How to go from $\frac{\|g_{k+1}\|}{\|g_{k}\|^2} \leq c$ to $\frac{\|x_{k+1} - x_*\|}{\|x_{k} - x_*\|^2} \leq c$ as $k\to \infty$

Posted: 26 Jun 2022 10:30 AM PDT

I am reading a paper where

$$\frac{\|g_{k+1}\|}{\|g_{k}\|^2} \leq c \tag{1}$$ as $k\to \infty$ where $g_k = g(x_k) \stackrel{\Delta}{=} \nabla f(x_k)$ and $\| \nabla^2 f(x_k)\| \leq L_H$. Then, it is stated that from Taylor expansion of $g_k$ and $g_{k+1}$ around $x_*$ and from $g(x_*)=0$ we get

$$\frac{\|x_{k+1} - x_*\|}{\|x_{k} - x_*\|^2} \leq c \tag{2}$$ as $k\to \infty$. I know that the Taylor expansion is

$$g_{k+1} = \nabla f (x_k + p_k) = \nabla f (x_k) + \int_{0}^{1} \nabla^2 f (x_k + t p_k) p_k dt$$ with $p_k = x_{k+1} - x_k$, but I find it confusing to apply it at $x_*$ instead of $x_k$. The most promising relation I have found is

$$\nabla f_k -\nabla f_* = \int_{0}^{1} \nabla^2 f (x_k + t(x_* -x_k))(x_k - x_*)dt \tag{3}$$ in the proof of Theorem 3.5 in [1], but again I could not understand how it is derived from Taylor's theorem (Theorem 2.1 in [1]). Taking the norm of $(3)$, we get

$$\begin{aligned}\| \nabla f_k\| \leq & \underbrace{\|\nabla f_* \|}_{0}+ \left\|\int_{0}^{1} \nabla^2 f (x_k + t(x_* -x_k))(x_k - x_*)dt \right\| \\ \leq & \underbrace{\|\nabla f_* \|}_{0}+ \int_{0}^{1} \|\nabla^2 f (x_k + t(x_* -x_k))(x_k - x_*)\|dt \leq L_H \|x_k - x_* \|\end{aligned}\tag{4}$$ where $\|\nabla^2 f(x) \| \leq L_H$ is used. Similarly, for step $k+1$ we get

$$\| \nabla f_{k+1}\| \leq L_H \|x_{k+1} - x_* \|$$ which gets us closer to $(2)$. Could someone please help me proceed?

[1] Jorge Nocedal and Stephen J. Wright, Numerical Optimization.
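A possible route to $(2)$, hedged on an extra assumption the paper likely makes but which is not stated in the excerpt: that $\nabla^2 f(x_*)$ is nonsingular, say $\|\nabla^2 f(x_*)^{-1}\| \le 1/\mu$. Then near $x_*$ the gradient is bounded below as well as above, which converts $(1)$ into $(2)$ up to a constant:

```latex
% Assumption (not stated in the excerpt): \nabla^2 f(x_*) is nonsingular,
% so for x_k near x_* one also has a LOWER bound alongside (4):
%   \mu \|x_k - x_*\| \le \|g_k\| \le L_H \|x_k - x_*\|.
% Combining both bounds with (1):
\frac{\|x_{k+1}-x_*\|}{\|x_k-x_*\|^2}
  \le \frac{\|g_{k+1}\|/\mu}{\bigl(\|g_k\|/L_H\bigr)^2}
  = \frac{L_H^2}{\mu}\cdot\frac{\|g_{k+1}\|}{\|g_k\|^2}
  \le \frac{L_H^2}{\mu}\,c .
```

So the quadratic rate survives, with $c$ replaced by $c\,L_H^2/\mu$.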

Is this a sound line of reasoning to conclude that $\sqrt[n]{n!} \sim \frac{n}{e}$?

Posted: 26 Jun 2022 10:39 AM PDT

In the paper Decomposable Searching Problems I. Static-to-Dynamic Transformations by Bentley and Saxe, the authors state without proof that

$$\sqrt[n]{n!} \sim {\frac{n}{e}}\text.$$

I have a line of reasoning below that I think correctly proves this, but I'm a bit shaky about whether I can apply tilde approximations this way. Is this reasoning correct?

Using Stirling's approximation, we know that

$$n! \sim \left(\frac{n}{e}\right)^n \sqrt{2\pi n}\text.$$

Taking $n$th roots then gives

$$\sqrt[n]{n!} \sim \left( \frac{n}{e} \right) \sqrt[2n]{2 \pi n} \sim \frac{n}{e}\text.$$

The step I'm uncomfortable about is whether I can take $n$th roots of both sides of the tilde expression. This feels like it "ought" to work, but I've been surprised by tilde expressions in the past and wouldn't want to bet the farm on it. :-)
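The conclusion can also be sanity-checked numerically; using `lgamma` avoids overflowing $n!$:

```python
from math import exp, lgamma

# Sanity check that (n!)^(1/n) / (n/e) -> 1.  lgamma(n+1) = ln(n!) avoids
# overflow, so (n!)^(1/n) = exp(lgamma(n+1) / n).
def ratio(n):
    return exp(lgamma(n + 1) / n) / (n / exp(1))

for n in (10, 100, 1000, 10_000):
    print(n, ratio(n))   # tends to 1; the gap shrinks like ln(2*pi*n)/(2n)
```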

Criteria for Hausdorff

Posted: 26 Jun 2022 10:12 AM PDT

Let $f:X\to Y$, where $Y$ is Hausdorff and $f$ is continuous. How can one prove that $f$ is injective if $X$ is Hausdorff?

It is easy enough to show that $f$ injective implies $X$ Hausdorff, and I have been able to find examples where, if $f$ is not injective, $X$ is not Hausdorff. However, is it possible to prove the above statement in general? I cannot seem to work it out from the definitions.

Proof of $\frac{1}{e^{\pi}+1}+\frac{3}{e^{3\pi}+1}+\frac{5}{e^{5\pi}+1}+\ldots=\frac{1}{24}$

Posted: 26 Jun 2022 10:03 AM PDT

I would like to prove that $\displaystyle\sum_{\substack{n=1\\n\text{ odd}}}^{\infty}\frac{n}{e^{n\pi}+1}=\frac1{24}$.

I found a solution by myself 10 hours after posting; here it is. Define

$$f(x)=\sum_{\substack{n=1\\n\text{ odd}}}^{\infty}\frac{nx^n}{1+x^n},\quad\quad g(x)=\displaystyle\sum_{n=1}^{\infty}\frac{nx^n}{1-x^n},$$

then I must prove that $f(e^{-\pi})=\frac1{24}$. It was not hard to find the relation between $f(x)$ and $g(x)$, namely $f(x)=g(x)-4g(x^2)+4g(x^4)$.

Note that $g(x)$ is a Lambert series, so by expanding the Taylor series for the denominators and reversing the two sums, I get

$$g(x)=\sum_{n=1}^{\infty}\sigma(n)x^n$$

where $\sigma$ is the divisor function $\sigma(n)=\sum_{d\mid n}d$.

I then define for complex $\tau$ the function $$G_2(\tau)=\frac{\pi^2}3\Bigl(1-24\sum_{n=1}^{\infty}\sigma(n)e^{2\pi in\tau}\Bigr)$$ so that $$f(e^{-\pi})=g(e^{-\pi})-4g(e^{-2\pi})+4g(e^{-4\pi})=\frac1{24}+\frac{-G_2(\frac i2)+4G_2(i)-4G_2(2i)}{8\pi^2}.$$

But it is proven in Apostol's "Modular Forms and Dirichlet Series", pages 69-71, that $G_2\bigl(-\frac1{\tau}\bigr)=\tau^2G_2(\tau)-2\pi i\tau$, which gives $\begin{cases}G_2(i)=-G_2(i)+2\pi\\ G_2(\frac i2)=-4G_2(2i)+4\pi\end{cases}\quad$. This is exactly what was needed to get the desired result.

One job done!

I find that sum fascinating: $e$ and $\pi$ together finally produce a rational. This is why mathematics is beautiful!

Thanks to everyone who contributed.
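As a numerical sanity check of the identity (the terms decay like $n e^{-n\pi}$, so a handful of odd $n$ already give machine precision):

```python
from math import exp, pi

# Terms decay like n * e^(-n*pi), so a few odd n give machine precision.
total = sum(n / (exp(n * pi) + 1) for n in range(1, 60, 2))
print(total, 1 / 24)   # both ~0.041666666666666...
```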
