Saturday, July 23, 2022

Recent Questions - Mathematics Stack Exchange

Are there any more journals that accept poetry?

Posted: 21 Jul 2022 10:17 PM PDT

I came across the Mathematical Intelligencer, which is a journal that accepts poetry and humor articles. I have a poem on maths which I sent to MI. They might accept it or not; in case they do not, are there any other such journals to which I can send my poem?

How to find a dominating function of $\frac{1}{x^2+\ln x}$ and show $\int _1^{\infty } \frac{dx}{x^2+ \ln x}$ converges?

Posted: 21 Jul 2022 10:17 PM PDT

I want to find a dominating function of $$\frac{1}{x^2 + \ln x}$$ and use it to show the convergence of the integral.
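
Not a proof, but a quick numerical sanity check (a Python sketch using scipy) that the integral is indeed finite; the dominating-function argument still has to be supplied separately:

    # Numerical sanity check only; this does not replace the comparison argument.
    import numpy as np
    from scipy.integrate import quad

    value, abs_err = quad(lambda x: 1.0 / (x**2 + np.log(x)), 1, np.inf)
    print(value, abs_err)   # a finite value with a small reported error estimate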

Find a pair of products of four groups that have the same order but are not isomorphic.

Posted: 21 Jul 2022 10:14 PM PDT

Find two products of four groups (from the choices given) that have equal order but are not isomorphic.

The goal is to show that merely having equal order does not imply isomorphism.

Choices to form two equal order products of four groups: $C_1, C_2, C_4, C_8, C_{16}, Q_8.$

Note: Write $C_1$ to fill out any unneeded factors.


I need to form two products of four groups having equal order but not isomorphic to each other.

Let the first product of four groups be denoted by $G= a\times b\times c\times d$, and the second by $H= e\times f\times g\times h.$

The choices need to have equal order while failing to be isomorphic.

So $G= C_{16}\times C_1\times C_1\times C_1$ and $H= C_8\times C_8\times C_1\times C_1$ will not help, as they would then have equal orders and be isomorphic as well.

A possible choice is: $G= C_{8}\times C_1\times C_1\times C_1$ and $H= Q_8\times C_1\times C_1\times C_1.$

Another choice: $G= C_{16}\times C_1\times C_1\times C_1$ and $H= Q_8\times C_8\times C_1\times C_1.$

But the answer key states, as a hint, that only one choice of $G, H$ is possible.

Expanding $\prod_{i=1}^n (a_i+b_i)$

Posted: 21 Jul 2022 10:13 PM PDT

I am trying to find an expansion of $$\prod_{i=1}^n (a_i+b_i).$$ Any help would be appreciated. Thanks.
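
One standard expansion is the subset sum $\prod_{i=1}^n (a_i+b_i) = \sum_{S \subseteq \{1,\dots,n\}} \prod_{i \in S} a_i \prod_{i \notin S} b_i$, with $2^n$ terms. A small sympy sketch verifying the identity for $n=3$:

    # Verify the subset-sum expansion of prod(a_i + b_i) for n = 3 (illustration only).
    import itertools
    import sympy as sp

    n = 3
    a = sp.symbols(f'a1:{n + 1}')
    b = sp.symbols(f'b1:{n + 1}')

    product = sp.Mul(*[a[i] + b[i] for i in range(n)])
    expansion = sp.Add(*[
        sp.Mul(*[a[i] for i in S]) * sp.Mul(*[b[i] for i in range(n) if i not in S])
        for k in range(n + 1)
        for S in itertools.combinations(range(n), k)
    ])
    assert sp.expand(product - expansion) == 0   # the two agree term by term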

Proving and intuiting why all solutions of a two-variable linear equation lie on a line.

Posted: 21 Jul 2022 09:46 PM PDT

I'm just starting Linear Algebra, and one of the first points made is that for any two-variable linear equation, all its solutions lie on one line. I'm trying to build intuition for that and prove it: that for $Ax + By + C = 0$, every solution is on the same line, and more explicitly that every point on the line is a solution and every other point is not a solution.

How do I prove that $\cos(\pi/5) = \varphi/2$? [duplicate]

Posted: 21 Jul 2022 09:41 PM PDT

This is linked to the golden triangle. I find it quite fascinating that $\cos(\pi/5) = \varphi/2$, but how do I start to find an exact value of $\cos(\pi/5)$ if all I know about the golden triangle is that the ratio $a/b = \varphi$?
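
Not a derivation, but a one-line floating-point check (Python) that the claim is at least numerically consistent:

    # Numerical check only: cos(pi/5) agrees with phi/2 to machine precision.
    import math

    phi = (1 + math.sqrt(5)) / 2
    print(math.cos(math.pi / 5), phi / 2)   # both print 0.8090169943749475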

Notation for the conjugate operator in functional analysis.

Posted: 21 Jul 2022 09:29 PM PDT

I'm confused about the definition of the conjugate operator in functional analysis.


Definition

Let $X,Y$ be normed vector spaces.

For $A\in \mathcal L(X,Y)$, define $A':Y^\ast \to X^\ast$ by $\langle x, A' g\rangle=\langle Ax, g\rangle$ for $x\in X, g\in Y^\ast.$

$A'$ is called the conjugate operator.


$X,Y$ are only normed spaces, so an inner product is not necessarily defined on them; hence $\langle \cdot, \cdot \rangle$ is not an inner product.

Does $\langle \cdot, \cdot \rangle$ mean the dual pairing? If so, $\langle x, A' g\rangle=(A'g)(x)$ and $\langle Ax, g\rangle=g(Ax)$, right?

Density of sum of two independent random variables

Posted: 21 Jul 2022 10:00 PM PDT

I am trying to solve the following problem

Suppose that $X$ and $Y$ are independent random variables with density functions $$ f_{X}(x)=\left\{\begin{array}{ll} 2 e^{-2 x}, & x \geq 0 \\ 0, & x<0, \end{array} \quad \text { and } \quad f_{Y}(x)= \begin{cases}4 x e^{-2 x}, & x \geq 0 \\ 0, & x<0\end{cases}\right. $$ Find the density function of $X+Y$.

Since $X$ and $Y$ are independent, the density of their sum is given by the convolution of their densities.

$$ f_{X+Y}(x)=f_{X} * f_{Y}(x)=\int_{-\infty}^{\infty} f_{X}(\tau) f_{Y}(x-\tau)d\tau=\int_{0}^\infty 2e^{-2 \tau}4 (x-\tau) e^{-2 (x-\tau)}d\tau $$

$$=\int_{0}^\infty 8 (x-\tau) e^{-2x}d\tau$$

However this integral diverges. What have I done wrong?
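
A Monte Carlo sketch may help locate the issue: since $f_Y(x-\tau)$ vanishes for $\tau > x$, the convolution is really over $[0, x]$, which would give $4x^2e^{-2x}$; the simulation below compares a histogram of $X+Y$ against that candidate density (a sanity check, not a derivation):

    # Compare an empirical histogram of X + Y with the candidate density 4 x^2 e^{-2x}.
    import numpy as np

    rng = np.random.default_rng(0)
    n = 10**6
    x = rng.exponential(scale=0.5, size=n)        # f_X: exponential with rate 2
    y = rng.gamma(shape=2.0, scale=0.5, size=n)   # f_Y(x) = 4 x e^{-2x}: Gamma(2, rate 2)
    s = x + y

    hist, edges = np.histogram(s, bins=100, range=(0.0, 5.0), density=True)
    centers = (edges[:-1] + edges[1:]) / 2
    candidate = 4 * centers**2 * np.exp(-2 * centers)   # convolution taken over [0, x]
    print(np.max(np.abs(hist - candidate)))             # small, so the candidate fits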

Are these two methods for obtaining the midpoint equivalent?

Posted: 21 Jul 2022 09:17 PM PDT

I was working on a computer science problem on the LeetCode website. The solution required calculating the midpoint between two numbers; let's call them start and end. Ultimately my solution passed the tests. The issue is that one of the other solutions I found was similar, but it calculated the midpoint differently.

This is how I calculated it: mid = (start + end) / 2
This is how the other solution calculated it: mid = start + (end - start) / 2

Are these two expressions equivalent? I tried to use my math skills, but they seem to have deteriorated over the decades. This is what I came up with so far, but I just need some insight:

(start + end) / 2 = start + (end - start) / 2
start + end = (start + (end - start) / 2) * 2
start + end = 2 * start + (end - start)
start + end = start + end
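
The algebra above does check out: over the reals (and over Python's arbitrary-precision integers) the two expressions are identical. The reason the second form appears in LeetCode solutions is fixed-width arithmetic: in a language with 32-bit integers, start + end can overflow even when the midpoint itself is representable, while end - start stays small. A Python sketch (Python itself does not overflow, so the problematic intermediate is only exhibited, not triggered):

    # Both forms agree mathematically; the difference is overflow in fixed-width ints.
    INT_MAX = 2**31 - 1                   # upper bound of a 32-bit signed integer
    start, end = INT_MAX - 1, INT_MAX

    print((start + end) // 2)             # fine in Python's big ints...
    print(start + (end - start) // 2)     # ...and the same value
    print(start + end > INT_MAX)          # True: this intermediate would overflow
                                          # a 32-bit int; end - start would not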

Is the statement valid?

Posted: 21 Jul 2022 10:16 PM PDT

Now consider a point just next to point $P(x,y)$, say $H(x+h,y')$. Here, $h$ is infinitesimally small, such that $h\to 0$. Now, the slope of the line between these 2 points will give us the instantaneous rate of change, as after that instant we will move on to the next instant/point.

The above statement is very important to understand. Points $P$ and $H$ are actually next to each other! (That's how we have defined them.) Even if you see a curve of $x^2$ on your screen and take 2 pixels that are next to each other to be lying on the curve, they are still actually not next to each other with respect to the curve, as between those 2 pixels there are infinitely many points; but the points that we have taken, that is, $P$ and $H$, are next to each other!

The above two paragraphs are from an answer on this forum (link to answer). I see flaws in the answer, but I was confused since it was getting upvotes. My question is: how can you take two points next to each other? Doesn't that violate the density property of the real numbers?

For context, the answer was written to explain what derivatives are and what they mean. I just feel the statements used are mathematically invalid; if they are, the answer should be removed, as it would mislead many. Let me know your thoughts.

Problem 24, Chapter 4 of Introduction to Analytic Number Theory by Apostol:

Posted: 21 Jul 2022 09:40 PM PDT

Let $A(x)$ be defined for all $x >0$ and assume that

$$T(x) = \sum_{n\le x} A(x/n) = ax\log x + bx + o\left(\frac{x}{\log x}\right),~~~~ \text{as}~~x \rightarrow \infty$$

where $a$ and $b$ are constants. Prove that:

$$A(x)\log x + \sum_{n\le x} A(x/n) \Lambda (n) = 2ax\log x + o(x\log x), ~~~~ \text{as}~~x \rightarrow \infty$$ (Here $\Lambda(n)$ is the Mangoldt function). Verify that Selberg's formula of Theorem 4.18 is a special case.

I am having difficulty in seeing how Selberg's formula $$\psi(x) \log x + \sum_{n\le x} \psi(x/n) \Lambda (n) = 2x\log x + O(x)$$ follows from the first part of the problem: Using $A(x) = \psi(x)$, $a=1$ and $b=-1$ of course satisfy the assumptions of the problem but Selberg's formula gives an estimate of $O(x)$ whereas the result of this problem gives an error of $o(x\ln(x))$. Does $o(x\ln(x))$ imply $O(x)$? (I don't think so!) Selberg's formula uses the stronger estimate of $O(\log x)$ for the sum $\sum_{n \le x} \psi(x/n)( = x\log x -x +O(\log x))$, so I am not sure how the weaker assumption of $o(x/\log x)$ leads to the stronger estimate of $O(x)$.
What am I missing here? Any help is appreciated.

In what sense is a Donut isomorphic to a Mug?

Posted: 21 Jul 2022 09:58 PM PDT

I am studying a bit of topology, and I am utterly confused when trying to figure out in what sense people say things like "a donut is isomorphic to a cup".

What topology (open-set definition) are we putting on the donut? Is it the induced topology from the ambient space? If so, does it mean we necessarily need an ambient space to talk about things like "topological invariants"?

Let $V,W$ be two vector spaces, $T \in L(V,W)$. If $\{v_i\}_{i=1}^p$ are linearly independent in $V$, is $\{T(v_i)\}_{i=1}^p$ so in $W$?

Posted: 21 Jul 2022 09:20 PM PDT

Question: Suppose $T : V \to W$ is one-to-one and linear. Show that linearly independent vectors in $V$ have linearly independent images under $T$; that is, if $$\{v_1,v_2,...,v_n\}$$ is linearly independent in $V$, then $$\{T(v_1),T(v_2),··· ,T(v_n)\}$$ is linearly independent in $W$.


I know that since $T$ is one-to-one, the kernel of $T$ contains only the zero vector.

I am supposed to prove this by contradiction.

Let's assume that $$\{T(v_1),T(v_2),··· ,T(v_n)\}$$ is linearly dependent. Then there exist scalars $c_1, c_2, \dots, c_n$, not all $0$, such that $$c_1T(v_1) + c_2T(v_2) + \dots + c_nT(v_n) = 0.$$

Am I on the right track so far? How do I get to the point where I can reach my contradiction? Or is there a better way of proving the question?

If $\{v_1+v_2, v_2+v_3, v_1+v_3\}$ are linearly independent then $\{v_1, v_2, v_3\}$ are linearly independent

Posted: 21 Jul 2022 10:02 PM PDT

Problem. Prove that

for $v_1, v_2, v_3 \in \mathbb{R}^3$, if $\{v_1+v_2, v_2+v_3, v_1+v_3\}$ are linearly independent then $\{v_1, v_2, v_3\}$ are linearly independent.


What I tried:
Let $m,n,p \in \mathbb{R}$ be such that $$mv_1+nv_2+pv_3 = 0\;(\star)$$ From the hypothesis we know that if $a,b,c \in \mathbb{R}$ with $a(v_1+v_2)+b(v_2+v_3)+c(v_1+v_3) = 0$, then $a=b=c=0$.

First, every element $\begin{pmatrix}m\\n\\p \end{pmatrix} \in \mathbb{R}^3$ can be uniquely written in terms of $A = \biggl\{\begin{pmatrix}1\\0\\1 \end{pmatrix},\begin{pmatrix}1\\1\\0 \end{pmatrix},\begin{pmatrix}0\\1\\1 \end{pmatrix}\biggr\}$ because $A$ is a basis of $\mathbb{R}^3$, so we can let $\begin{cases} m=a+b \\ n=b+c \\ p=a+c \end{cases}$. So, from $$ \begin{align} (\star) \implies (a+b)v_1 + (b+c)v_2+(a+c)v_3=0 \\ \iff av_1+bv_1+bv_2+cv_2+av_3+cv_3=0 \\ \iff a(v_1+v_3)+b(v_1+v_2)+c(v_2+v_3)=0 \\ \implies a=b=c=0 \implies m=n=p=0 \end{align}$$

$\implies \{v_1, v_2, v_3\}$ are linearly independent

Please correct me if I am wrong. Thanks!

calculate the limit of a function

Posted: 21 Jul 2022 09:17 PM PDT

I want to calculate the limit of this function as $x\to\infty$:

$\lim_{x\to\infty}\left(\frac{c+\sqrt{x}}{-c+\sqrt{x}}\right)^x\exp(-2c\sqrt{x})$, where $c$ is a constant.

Numerically, I plotted a graph of this function, and I think the answer is 1. But theoretically, I have no idea how to proceed.
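
Evaluating the expression directly overflows in floating point for large $x$ (the power term is astronomically large before the exponential damps it), so a numerical check is best done in log space. A Python sketch with $c=1$; the printed values drift toward $1$, consistent with the plot:

    # Evaluate in log space to avoid overflow (sketch; c = 1, needs x > c^2).
    import math

    def f(x, c=1.0):
        r = math.sqrt(x)
        return math.exp(x * math.log((c + r) / (r - c)) - 2 * c * r)

    for x in [1e2, 1e4, 1e6, 1e8]:
        print(x, f(x))   # approaches 1 from above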

If $G$ is triangle-free and $\delta(G)$ is large, then $G$ is a bipartite graph

Posted: 21 Jul 2022 09:39 PM PDT

Show that every $n$-vertex triangle-free graph with minimum degree greater than $2n/5$ is bipartite.

First of all this result is vacuously true for $n=1,3,5$. For $n=2$ and $n=4$, it is trivially true since the desired graphs are $K_2$ and $K_{2,2}$.

We assume that $n\geq 6$ and $V(G)=\{v_1,\dots,v_n\}$. Since $\deg(v_1)>2n/5$, then take $v_2\in N(v_1)$ and consider $N(v_2)$ also.

It is easy to see that $N(v_1)$ and $N(v_2)$ are disjoint and independent sets. Moreover, $|V(G)\setminus (N(v_1)\sqcup N(v_2))|<n/5$.

  1. Consider the most trivial case, when $V(G)\setminus (N(v_1)\sqcup N(v_2))=\varnothing$. Hence $V(G)=N(v_1)\sqcup N(v_2)$ and $G$ is a bipartite graph with vertex sets $N(v_1)$ and $N(v_2)$, i.e. $G=G[N(v_1),N(v_2)]$.
  2. If $|V(G)\setminus (N(v_1)\sqcup N(v_2))|=1$, then $V(G)= N(v_1)\sqcup N(v_2)\sqcup \{w\}$. If $N(w)\cap N(v_1)=\varnothing$ or $N(w)\cap N(v_2)=\varnothing$, then $G$ is a bipartite graph. Indeed, assume WLOG that $N(w)\cap N(v_1)=\varnothing$; then $G$ is bipartite with vertex sets $N(v_1)\sqcup \{w\}$ and $N(v_2)$.

Question 1. What if $|N(w)\cap N(v_1)|=k>0$ and $|N(w)\cap N(v_2)|=\ell>0$? I sketched a picture, and I see that $G$ is not bipartite in this case. I was wondering how to prove it rigorously.

Question 2. I was wondering whether it is possible to solve the problem with this approach. It seems a bit difficult to me, but I have not checked the details since I am stuck on Question 1.

Does $\frac{dv(x,y)}{d|x-y|}$ denote the change in $v$ with respect to a change in the difference between $x$ and $y$?

Posted: 21 Jul 2022 09:31 PM PDT

I have a function $v(x,y)$, where $x,y \in \mathbb{R}$.

I would like to turn the following statement into math: "the function $v(x,y)$ is increasing at a decreasing rate as the difference between $x$ and $y$ is increasing."

Is there a simple way to express this mathematically, such as $\frac{dv(x,y)}{d|x-y|} >0$, etc.?

Ordered pair of natural numbers such that the geometric mean is exactly $8$ less than the arithmetic mean.

Posted: 21 Jul 2022 09:27 PM PDT

For an ordered pair of natural numbers $(a,b)$, the geometric mean is exactly $8$ less than the arithmetic mean. How many such pairs exist if both numbers are less than $10000$ and $a>b$?

$\sqrt{ab}= \frac{a+b}{2}-8$
$\Rightarrow \sqrt{ab}= \frac{a+b-16}{2}$
$\Rightarrow 2\sqrt{ab}= a+b-16$
$\Rightarrow 16= a+b-2\sqrt{ab}$
$\Rightarrow 16= (\sqrt{a}-\sqrt{b})^2$
$\Rightarrow \sqrt{a}-\sqrt{b} = 4\ \ (\text{taking the positive root, since } a>b)$

Now $a$ and $b$ are natural numbers and the difference between their square roots is a natural number, so what can we conclude from this? I am not able to proceed from here. Please help!
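
A brute-force count consistent with that conclusion (a sketch, assuming the naturals start at $1$): $\sqrt a = \sqrt b + 4$ gives $a = b + 8\sqrt b + 16$, so $\sqrt b$ must be an integer; hence $b = k^2$ and $a = (k+4)^2$:

    # Count pairs (a, b) = ((k+4)^2, k^2) with both numbers below 10000.
    import math

    count = 0
    for b in range(1, 10000):
        k = math.isqrt(b)
        if k * k == b:              # b must be a perfect square
            a = (k + 4) ** 2        # then sqrt(a) - sqrt(b) = 4
            if a < 10000:
                count += 1
    print(count)                    # 95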

Define $g(a) := \liminf_{x \to a} f(x)$ for all $a \in X$. Then $g$ is lower semi-continuous

Posted: 21 Jul 2022 09:16 PM PDT

Let $X$ be a topological space. Then I'm trying to prove below result about a function $g$ derived from $f$.

Theorem: Let $f:X \to \mathbb R \cup \{\pm \infty\}$. For $a \in X$, let $\mathcal N_a$ be the set of all open neighborhoods of $a$. We define a function $g:X \to \mathbb R \cup \{\pm \infty\}$ by $$ g(a) := \liminf_{x \to a} f(x) := \sup _{V \in \mathcal N_a} \inf _{x \in V} f(x) \quad \forall a \in X. $$ Then $g \le f$ and $g$ is lower semi-continuous.

  1. Could you have a check on my attempt?

  2. Is there another simpler proof that uses the criterion "$g$ is l.s.c. iff for all $\lambda \in \mathbb R$, the set $\{x \in X \mid g(x) \le \lambda\}$ is closed in $X$"?


My attempt: Clearly, $g \le f$. Fix $a \in X$. We want to prove $$ g(a) \le \liminf_{x \to a} g(x). $$

Lemma: Let $g:X \to \mathbb R \cup \{\pm \infty\}$. Fix $a \in X$ and $\beta \in \mathbb R$. The following statements are equivalent.

  • $$ \beta \le \liminf _{x \to a} g (x) $$
  • If $(x_d)$ is a net such that $x_d \to a$ and $g(x_d) \to \alpha$, then $\alpha \ge \beta$.

Let $(x_d)_{d\in D}$ be a net such that $x_d \to a$ and $g(x_d) \to \alpha$. By our Lemma, it suffices to show $\alpha \ge g(a)$. For each $(V, n) \in \mathcal N_a \times \mathbb N$,

  • There is $d_1 \in D$ such that $g(x_{d}) \in U_n := (\alpha - \frac{1}{n}, \alpha + \frac{1}{n})$ for all $d \ge d_1$.
  • There is $d_2 \in D$ such that $x_{d} \in V$ for all $d \ge d_2$.
  • There is $d_3 \in D$ such that $d_3 \ge d_2$ and $d_3 \ge d_1$.
  • By this result, there is a net $(y_t)$ such that $y_t \to x_{d_3}$ and $f(y_t) \to g(x_{d_3})$.
  • Clearly, $U_n$ is a neighborhood of $g(x_{d_3})$, and $V$ an open neighborhood of $x_{d_3}$. Then there is $y_{t_0}$ such that $y_{t_0} \in V$ and $f(y_{t_0}) \in U_n$.
  • Let $y_{V, n} := y_{t_0}$.

We endow $\mathcal N_a \times \mathbb N$ with a partial order $\le$ defined by $$ (V_1, n_1) \le (V_2, n_2) \iff (V_1 \supset V_2) \wedge (n_1 \le n_2). $$

Then $(\mathcal N_a \times \mathbb N, \le)$ is a directed set. By construction, $y_{V, n} \to a$ and $f(y_{V,n}) \to \alpha$. By this result, $\liminf_{x \to a} f(x)$ is the smallest cluster point among all convergent nets $(f(x_t))$ such that $x_t \to a$. This implies $\alpha \ge \liminf_{x \to a} f(x)$. This completes the proof.

Why can't columns of a generalized modal matrix for the same Jordan block be interchanged?

Posted: 21 Jul 2022 10:09 PM PDT

Suppose we have the matrix $$M=\begin{pmatrix} 1 & 0 & 1\\ 1 & 1 & 0\\ 0 & 0 & 1 \end{pmatrix}$$

Then we can find a Jordan normal form $J$ and generalized modal matrix $P$ such that $M=PJP^{-1}$. These are $$P=\begin{pmatrix} 0 & 1 & 0\\ 1 & 0 & 0\\ 0 & 0 & 1 \end{pmatrix} \textrm{ and } J=\begin{pmatrix} 1 & 1 & 0\\ 0 & 1 & 1\\ 0 & 0 & 1 \end{pmatrix}$$

I understand that one can swap the order of Jordan blocks. My question is: why must the generalized eigenvectors of a Jordan chain in the modal matrix appear in order of increasing rank? In other words, why couldn't we swap the position of two generalized eigenvectors corresponding to the same Jordan block, so that

$$P=\begin{pmatrix} 1 & 0 & 0\\ 0 & 1 & 0\\ 0 & 0 & 1 \end{pmatrix}$$

would also be a valid modal matrix for this problem. I have some intuitive understanding as to why this is wrong, but I'm interested in a more formal explanation/proof.
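
A quick numerical illustration of the failure (a check, not the formal explanation asked for): the given $P$ reproduces $M$, while the proposed reordering, which here is the identity matrix, conjugates $J$ to $J$ itself rather than to $M$:

    # The given modal matrix works; the reordered one does not.
    import numpy as np

    M = np.array([[1., 0., 1.], [1., 1., 0.], [0., 0., 1.]])
    J = np.array([[1., 1., 0.], [0., 1., 1.], [0., 0., 1.]])
    P = np.array([[0., 1., 0.], [1., 0., 0.], [0., 0., 1.]])

    print(np.allclose(P @ J @ np.linalg.inv(P), M))           # True: valid modal matrix

    P_bad = np.eye(3)                                         # the proposed column order
    print(np.allclose(P_bad @ J @ np.linalg.inv(P_bad), M))   # False: I J I^{-1} = J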

Getting the area between the $y$-axis, $f(x)$, and $f(2-x)$ when the functions are given only through their difference?

Posted: 21 Jul 2022 10:03 PM PDT

  • For a polynomial $f(x)$, let the function $g(x) = f(x) - f(2-x)$.
  • $g'(x) = 24x^2 - 48x + 50$

What is the area between $y=f(x)$, $y=f(2-x)$, and the $y$-axis?

My approach:

  1. $g(1) = f(1) - f(1) = 0$.
  2. From $g'(x)$, $g(x) = 8x^3 - 24x^2 + 50x + C$.
  3. From 1 and 2, $C = -34$.
  4. Since $f(x)$ is a cubic function, let $f(x) = ax^3 + bx^2 + cx + d$ and compare coefficients of $g$ and $f(x) - f(2-x)$.
  5. $a = 4$, $2b + c = 1$.

And I'm stuck. I can't see how to get more information about $f(x)$ from these conditions and obtain the area from it.
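
One possible way forward (a sketch, under the assumption that the intended region runs from the $y$-axis to the curves' crossing at $x=1$, where $g$ vanishes): the area between $y=f(x)$ and $y=f(2-x)$ is $\int_0^1 |g(x)|\,dx$, so $g$ alone suffices and no further information about $f$ is needed:

    # Area between the curves from x = 0 to their crossing at x = 1 (assumption above).
    import sympy as sp

    x = sp.symbols('x')
    g = 8*x**3 - 24*x**2 + 50*x - 34   # antiderivative of g', pinned down by g(1) = 0

    # g' = 24x^2 - 48x + 50 has negative discriminant, so g is strictly increasing;
    # with g(1) = 0 this gives g <= 0 on [0, 1], hence |g| = -g there.
    area = sp.integrate(-g, (x, 0, 1))
    print(area)   # 15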

What conditions allow one rational sequence to bound another? [closed]

Posted: 21 Jul 2022 10:15 PM PDT

Let $(a_k)$ and $(b_k)$ be infinite sequences of rational numbers with the following properties.

  • For every $k$, $0 \le a_k \le 1$ and $0 \le b_k \le 1$.
  • $a_k$ and $b_k$ are positive infinitely often.
  • $\sum a_k \lt 1$, and that sum is irrational.
  • $\sum b_k = 1.$
  • $(a_k)$ has a known formula for each $a_k$, and $(b_k)$ is to be determined.

Then:

  1. What conditions ensure that $a_k$ is bounded above by $b_k$ for every $k$? In other words, given a formula for $(a_k)$, how can $(b_k)$ be built so that $0\le a_k \le b_k \le 1$ for every $k$?

  2. What additional conditions ensure that $a_k$ is bounded above by $b_k$ for every $k$, when—

    • $(a_k)$ is known to be eventually decreasing, or
    • $(b_k)$ is built to be (eventually) geometrically decreasing, or
    • any combination of these?

For example, if $a_k$ is $1/k!$ if $k\ge 2$ and $k$ is even, and 0 otherwise (these are the coefficients of the Taylor series for $\cosh(x)-1$), then $(b_k)$ can be built as $1/2^{(k-2)/2+1}$ if $k\ge 2$ and $k$ is even, and 0 otherwise. For a different sequence $(a_k)$, a different sequence $(b_k)$ has to be built, and so on (e.g., different $(b_k)$ sequences for the Taylor coefficients of $\exp(x/4)/2$, $\sinh(x)/2$, and $\cosh(x)/2$), thus making it hard to determine whether a given sequence will work.

Motivation:

In my scenario:

  • $(b_k)$ is a probability distribution (that is, $b_k$ is the probability of getting $k$).
  • $\lambda$ is the probability that a biased coin shows heads.
  • $f(\lambda) = \sum_{k\ge 0} a_k \lambda^k$ is a known function.

Then by sampling $X$ from the distribution $(b_k)$ and flipping a coin with probability of heads $\frac{a_X}{b_X} \lambda^X$, we can thus "flip" a new coin whose probability of heads is $f(\lambda)$ — without estimating the probability $\lambda$ directly. But this only works if, among other things, $a_k$ is bounded above by $b_k$ for every $k$ (in other words, the sequence $(a_k)$ can be "tucked" under the probabilities of the discrete distribution, represented by $(b_k)$), and this condition is not always easy to verify.

Efficiently evaluate $\underset{X \gets \mathcal{N}(\mu,\sigma^2)}{\mathbb{E}}\left[\left(1 - s + s \cdot e^X \right)^{\sqrt{-1} \cdot t}\right]$

Posted: 21 Jul 2022 10:14 PM PDT

Let $\mu, \sigma, s, t \in \mathbb{R}$ with $\sigma>0$ and $0 \le s \le 1$. Define $$a_{\mu, \sigma, s, t} := \underset{X \gets \mathcal{N}(\mu,\sigma^2)}{\mathbb{E}}\left[\left(1 - s + s \cdot e^X \right)^{i \cdot t}\right]$$ $$= \frac{1}{\sqrt{2\pi \sigma^2}}\int_{-\infty}^\infty \exp\left(\frac{-(x-\mu)^2}{2\sigma^2} + i \cdot t \cdot \log\left(1-s+s\cdot e^x\right)\right) \mathrm{d}x,$$ where $i^2=-1$.

I would like to be able to efficiently compute the value of $a_{\mu, \sigma, s, t}$ numerically. A closed-form solution seems too good to be true. But I'm hoping for something like a rapidly converging series.

To be a bit more precise, I want a procedure (i.e., implementable on a computer) that, given inputs $\mu,\sigma,s,t,\varepsilon\in\mathbb{R}$, computes $\tilde{a}_{\mu,\sigma,s,t}$ with $|\tilde{a}_{\mu,\sigma,s,t}-a_{\mu,\sigma,s,t}|\le\varepsilon$. The running time of the procedure (assuming basic arithmetic operations take unit time) should be polynomial in $\log(1/\varepsilon)$. An explicit series with exponentially decaying terms would suffice for this. Why do I need such fast runtime? The quantity of interest is exponentially small $|a_{\mu,\sigma,s,t}|\approx\exp(-s^2t^2\sigma^2/2)$, so $\varepsilon$ needs to be exponentially small too (otherwise $\tilde{a}_{\mu,\sigma,s,t}=0$ provides a trivial estimate), but I still want polynomial runtime.


Firstly, $a_{\mu, \sigma, s, t}$ is well defined. It's the expectation of a bounded and continuous function of a Gaussian. In particular, there is an obvious Monte Carlo algorithm for computing this value: Sample $X \gets \mathcal{N}(\mu,\sigma^2)$ and compute $\left(1 - s + s \cdot e^X \right)^{i \cdot t}$; repeat this procedure and average the results. But to get accuracy $\varepsilon$, this algorithm would require roughly $1/\varepsilon^2$ repetitions. I want an algorithm that runs in something like $\log(1/\varepsilon)$ time. Numerical integration methods are a bit faster (roughly $1/\varepsilon$ steps), but that's still not as rapid as I'd like.
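
For reference, a minimal version of that Monte Carlo baseline (a sketch; it exhibits exactly the $1/\sqrt{n}$ accuracy limitation described above):

    # Plain Monte Carlo estimate of a_{mu,sigma,s,t}; error decays like 1/sqrt(n).
    import numpy as np

    def a_mc(mu, sigma, s, t, n=10**6, seed=0):
        rng = np.random.default_rng(seed)
        x = rng.normal(mu, sigma, size=n)
        return np.mean(np.exp(1j * t * np.log(1 - s + s * np.exp(x))))

    print(a_mc(0.0, 1.0, 0.0, 2.0))   # s = 0: should be ~1
    print(a_mc(0.5, 1.0, 1.0, 2.0))   # s = 1: ~exp(i mu t - sigma^2 t^2 / 2)
    print(a_mc(0.0, 1.0, 0.5, 2.0))   # a generic point, for comparison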

There are a few easy special cases:

  • If $t=0$, then $a_{\mu, \sigma, s, t}=1$.
  • If $s=0$, then $a_{\mu, \sigma, s, t}=1$.
  • If $s=1$, then $a_{\mu, \sigma, s, t}=e^{i \cdot \mu \cdot t - \sigma^2 \cdot t^2 / 2}$. (This is just the characteristic function of the Gaussian.)

As a rough approximation $\log(1-s+s\cdot e^x) \approx sx$, whence $a_{\mu,\sigma,s,t} \approx \mathbb{E}[e^{i t s X}] = e^{its\mu - t^2s^2\sigma^2/2}$.


One thing I tried is a binomial series expansion: $$\left(1 - s + s \cdot e^x \right)^{i \cdot t} = \sum_{k=0}^\infty {it \choose k} \cdot (1-s)^{it-k} \cdot s^k \cdot e^{k \cdot x}.$$ But this series only converges if $x<\log(1/s-1)$. In particular, when I try to evaluate this series with a Gaussian $X$, the terms grow exponentially, as $\mathbb{E}[e^{k \cdot X}] = e^{\mu k + k^2 \sigma^2 / 2}$.

Here's another approach: Define $$f(x) := \left(1-s+s\cdot e^x\right)^{i \cdot t} = \exp(i \cdot t \cdot \log(1-s+s\cdot e^x)).$$ Then $a_{\mu,\sigma,s,t} = \mathbb{E}[f(X)]$ for $X \gets \mathcal{N}(\mu,\sigma^2)$. Now let $$\hat{f}(\xi) = \int_{-\infty}^\infty f(x) \cdot e^{-2\pi i \xi x} \mathrm{d}x$$ be the Fourier transform of $f$. By Parseval's, $$a_{\mu,\sigma,s,t} = \mathbb{E}[f(X)] = \int_{-\infty}^\infty \hat{f}(\xi) \cdot \hat{X}(\xi) \mathrm{d}\xi,$$ where $\hat{X}(\xi) = \mathbb{E}[e^{-2\pi i \xi X}] = e^{-2\pi \mu \xi i -2\pi^2 \sigma^2 \xi^2}$ is the Fourier-Stieltjes transform of $X \gets \mathcal{N}(\mu,\sigma^2)$.

Now this would be progress if $\hat{f}$ is easier to work with than $f$. Unfortunately, $\hat{f}$ is not even well-defined because $f$ is not an integrable function. Nevertheless, I do feel like this approach might be salvageable.

Any suggestions for how to approach this would be greatly appreciated!

Evaluating $\int_0^1\frac{\ln^2(1+x)+2\ln(x)\ln(1+x^2)}{1+x^2}dx$

Posted: 21 Jul 2022 09:57 PM PDT

How to show that

$$\int_0^1\frac{\ln^2(1+x)+2\ln(x)\ln(1+x^2)}{1+x^2}dx=\frac{5\pi^3}{64}+\frac{\pi}{16}\ln^2(2)-4\,\text{G}\ln(2)$$

without breaking up the integrand since we already know:

$$\int_0^1\frac{\ln^2(1+x)}{1+x^2}dx=4\,\Im\operatorname{Li}_3(1+i)-\frac{7\pi^3}{64}-\frac{3\pi}{16}\ln^2(2)-2\,\text{G}\ln(2)$$

and

$$\int_0^1\frac{\ln(x)\ln(1+x^2)}{1+x^2}dx=-2\,\Im\operatorname{Li_3}(1+i)+\frac{3\pi^3}{32}+\frac{\pi}8\ln^2(2)-\text{G}\ln(2).$$

We can see that the imaginary parts cancel out, leaving only a real value, and this fact pushed me to propose the question. I tried integration by parts and the substitution $x\to (1-x)/(1+x)$, but to no avail.


The two integrals are given in (here) and (here) respectively.

Explain the mechanism behind the "1" in the Present value calculation of money

Posted: 21 Jul 2022 09:39 PM PDT

The formula is $$PV=\frac{A}{(1+\text{discount rate})^{\text{number of years}}}.$$ It would be helpful if someone could explain the $1$, and if there's a simple math rule, law, convention, or algebraic expression behind it, please expatiate on that as well. Imagine yourself explaining to a grade schooler (unfortunately, my math skills are that bad). Thank you for your time.
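
In case a concrete illustration helps: the $1$ simply means "keep the principal". One year of growth at rate $r$ turns an amount $A$ into $A + A\cdot r = A\cdot(1+r)$; the $1$ carries the original money forward and the $r$ adds the interest. Compounding for $n$ years multiplies by $(1+r)$ each year, and present value just runs that backwards. A small Python sketch:

    # Grow money forward by (1 + r) each year, then discount it back.
    rate, years = 0.05, 3
    pv = 100.0

    amount = pv
    for _ in range(years):
        amount = amount + amount * rate   # principal (the "1") plus interest (the r)
    print(amount)                         # 115.7625
    print(pv * (1 + rate) ** years)       # the same number, written as a power
    print(amount / (1 + rate) ** years)   # discounting recovers the present value: 100.0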

Frog-Jumping Out Of Well Word Problem

Posted: 21 Jul 2022 10:04 PM PDT

A frog is jumping out of a well 30 ft. deep. Each day he jumps 3 feet and slips two feet back. How many days does it take the frog to jump up to 30 ft. (out of the well)?

I did this problem long-hand and got 28 days, an answer others got as well. I am wondering if there is some series (geometric series, etc.) that applies. I tried to use a closed-form geometric series, but that does not seem to work. Are there any series solutions other than long-hand? Thanks.
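
A direct simulation confirms the long-hand count, and it also suggests why a geometric series does not apply: the height follows an arithmetic progression (net gain 1 ft per full day) until the final 3-ft jump clears the rim, giving $(30-3)/1 + 1 = 28$ days. A sketch:

    # Simulate the frog: +3 ft by day, -2 ft overnight unless it has escaped.
    height, day = 0, 0
    while True:
        day += 1
        height += 3          # daytime jump
        if height >= 30:     # escapes before slipping back
            break
        height -= 2          # overnight slip
    print(day)               # 28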

Area of a cyclic quadrilateral.

Posted: 21 Jul 2022 09:33 PM PDT

Question:

The distance from $SR$ to $PQ$ is $7$ cm, arc $SR$ is $48$ cm, and arc $SP \cong$ arc $QR$. Find the area of quadrilateral $SRQP$ ($P,Q,R,S$ are taken in order and $O$ is the centre). (Question figure)

What we (my friends and I) tried:


Approach 1 (my work):

In the construction, $OM\perp PQ$.
Let the radius of the circle be $r$.
By the Pythagorean theorem:
$$SM=MR=\sqrt{r^2-49}$$ $$SR=2\sqrt{r^2-49}$$ $$\text{Area}\,\triangle SOR =7\sqrt{r^2-49}$$ $$\text{Area}\,\triangle SOP=\text{Area}\,\triangle ROQ=\dfrac{7r}{2}$$ Area of quadrilateral:
$$7r+7\sqrt{r^2-49}$$ Now let $\angle MOR=\angle MOS=\theta$
$$\angle SOR=2\theta$$

By using the arc-length formula (with $\theta$ in degrees): $$48=r\cdot 2\theta\cdot\dfrac{\pi}{180}$$ $$\theta=\dfrac{4320}{\pi r}$$ In $\triangle OMR$:
$$\cos(\theta)=\dfrac{7}{r}$$ $$\cos\bigg(\dfrac{4320}{\pi r}\bigg)=\dfrac 7r$$ I have no idea how to simplify this.


Approach 2: Let $MN$ be $x$, $\angle SOR=\theta$, $\angle ROQ=\angle SOP=\phi$, and $\phi=\dfrac{180-\theta}{2}$.
Radius of circle:
$$ON=OM+MN=7+x$$ $$\text{Area}\triangle SOR=\dfrac 12 (7+x)^2 \sin \theta$$ $$\text{Area}\triangle SOP=\text{Area}\triangle ROQ=\dfrac 12 (7+x)^2 \sin \phi$$ $$\text{Area of quadrilateral }PQRS=\dfrac 12 (7+x)^2 \sin \theta+ (7+x)^2 \sin \phi$$ $$=\dfrac 12 (7+x)^2 \sin \theta+ (7+x)^2 \sin \bigg(\dfrac{180-\theta}{2}\bigg)$$

And $$\frac \theta {360}[2\pi(7+x)]= 48$$ Two equations and two variables, so it might be solvable (but I was not able to do so).


How to solve this question?

Thanks!


As per comments, it should be solved by numerical approximation.
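
Working in radians, the two relations collapse to a single equation in $r$: the arc gives $2\theta r = 48$, so $\theta = 24/r$, and $\cos\theta = 7/r$ then reads $r\cos(24/r) = 7$ (the degree-based equation above is the same statement). A bisection sketch, reusing Approach 1's area formula:

    # Solve r*cos(24/r) = 7 by bisection, then evaluate the Approach-1 area formula.
    import math

    f = lambda r: r * math.cos(24 / r) - 7
    lo, hi = 15.0, 25.0                  # f(15) < 0 < f(25): one sign change here
    for _ in range(60):
        mid = (lo + hi) / 2
        if f(lo) * f(mid) <= 0:
            hi = mid
        else:
            lo = mid

    r = (lo + hi) / 2
    area = 7 * r + 7 * math.sqrt(r * r - 49)
    print(r, area)                       # r is roughly 19.8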

Obtaining positive eigenvalues of the matrix $A$?

Posted: 21 Jul 2022 09:46 PM PDT

Let us consider the matrix $A$, which has three parameters $R, C_1, C_3$. This is from the Ikeda map in real form.

It is defined as $$x \rightarrow R+(x \cos(\tau)-y \sin(\tau))$$ $$y \rightarrow x\sin(\tau)+y\cos(\tau)$$

The Jacobian matrix is given by:

\begin{equation*} A = \begin{bmatrix} \cos(\tau) + x \frac{\partial}{\partial x} \cos \tau - y\frac{\partial}{\partial x}\sin\tau & x\frac{\partial}{\partial y} \cos \tau - \sin \tau - y \frac{\partial}{\partial y} \sin \tau\\ \sin\tau + x \frac{\partial}{\partial x} \sin \tau + y \frac{\partial}{\partial x} \cos \tau & x \frac{\partial}{\partial y}\sin\tau + \cos \tau + y\frac{\partial}{\partial y}\cos \tau \end{bmatrix} \end{equation*} where $$\tau = C_{1} - \frac{C_{3}} {1+x^2+y^2}$$

$x,y$ are solutions of the nonlinear equations

\begin{equation} R+x \cos \tau - y \sin \tau = x\\ x\sin \tau + y \cos \tau = y \end{equation}

After calculating the determinant of the matrix $A$, we get $\det(A)=1$ (so the product of the eigenvalues is $1$), using $$\frac{\partial \tau}{\partial x} = \frac{2C_{3}x}{(1+x^2+y^2)^2}$$ $$\frac{\partial \tau}{\partial y} = \frac{2C_{3}y}{(1+x^2 + y^2)^2}$$

I am wondering: for which values of $R,C_{1},C_{3}$ can I obtain positive eigenvalues? After trying many values I get only complex or negative eigenvalues.

I am beginning to think the above matrix cannot have any positive eigenvalues at all.

Any sharp hawk-eye observations on this?

EDIT -

Suppose $R=0$. Then $x=0,y=0$ satisfies the nonlinear equations, and the trace of the matrix at $(0,0)$ is $2\cos \tau$. For the eigenvalues to be real and positive we need $\cos \tau > 1$, which is not possible, so we can eliminate the $R=0$ case. Now I am wondering: for $R \neq 0$, can the Jacobian matrix have positive eigenvalues?
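
One way to probe this numerically (a sketch; the parameter values are an arbitrary illustrative choice, not taken from the problem): solve the fixed-point equations with fsolve, build the Jacobian by finite differences, and inspect the eigenvalues:

    # Numerically probe the eigenvalues at a fixed point (illustrative parameters).
    import numpy as np
    from scipy.optimize import fsolve

    R, C1, C3 = 1.0, 0.4, 6.0            # arbitrary trial values

    def ikeda(p):
        x, y = p
        tau = C1 - C3 / (1 + x * x + y * y)
        return np.array([R + x * np.cos(tau) - y * np.sin(tau),
                         x * np.sin(tau) + y * np.cos(tau)])

    fp = fsolve(lambda p: ikeda(p) - p, np.array([1.0, 1.0]))   # may need other guesses

    h = 1e-6                             # Jacobian by central differences
    A = np.zeros((2, 2))
    for j in range(2):
        e = np.zeros(2)
        e[j] = h
        A[:, j] = (ikeda(fp + e) - ikeda(fp - e)) / (2 * h)

    print(fp, np.linalg.eigvals(A), np.linalg.det(A))   # det should come out ~1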

What does the wedge (or caret) mean in a matrix context?

Posted: 21 Jul 2022 09:42 PM PDT

While reading about coordinate transformations, I came across this:

$\Omega^{\gamma}_{\beta,\alpha}=[\omega^{\gamma}_{\beta,\alpha}\wedge]$

What does the caret (or wedge) mean? In the book it looks more like a caret than a wedge.

image from source

Taken from Groves, Paul D., Principles of GNSS, Inertial, and Multisensor Integrated Navigation Systems, 2nd ed., p. 45.

Thank you in advance.
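
For what it's worth, in Groves' notation (and in inertial-navigation texts generally) $[\omega\wedge]$ commonly denotes the skew-symmetric "cross-product" matrix built from the vector $\omega$, so that $[\omega\wedge]v = \omega\times v$. A small sketch assuming that reading:

    # Skew-symmetric matrix such that wedge(w) @ v == np.cross(w, v).
    import numpy as np

    def wedge(w):
        wx, wy, wz = w
        return np.array([[0.0, -wz,  wy],
                         [ wz, 0.0, -wx],
                         [-wy,  wx, 0.0]])

    w, v = np.array([1.0, 2.0, 3.0]), np.array([4.0, 5.0, 6.0])
    assert np.allclose(wedge(w) @ v, np.cross(w, v))   # matrix form of the cross product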

Are all eigenvectors, of any matrix, always orthogonal?

Posted: 21 Jul 2022 09:35 PM PDT

I have a very simple question. Are all eigenvectors, of any matrix, always orthogonal? I am trying to understand principal components, and it is crucial for me to see the basis of eigenvectors.
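
A quick numerical counterexample may be useful: symmetric matrices (the usual case in PCA, since covariance matrices are symmetric) do have orthogonal eigenvectors for distinct eigenvalues, but a general matrix need not:

    # Eigenvectors of a non-symmetric matrix need not be orthogonal.
    import numpy as np

    A = np.array([[2.0, 1.0],
                  [0.0, 3.0]])       # non-symmetric, eigenvalues 2 and 3
    vals, vecs = np.linalg.eig(A)
    print(vecs[:, 0] @ vecs[:, 1])   # about 0.707, not zero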
