Friday, April 15, 2022

Recent Questions - Mathematics Stack Exchange

Does a dynamical system have the volume-preserving property in a basin of attraction?

Posted: 15 Apr 2022 07:11 AM PDT

I have a question about basins of attraction: does the flow of a dynamical system have the volume-preserving property on every bounded region inside a basin of attraction?

Combining two equations to get a nicer form

Posted: 15 Apr 2022 07:11 AM PDT

How can I combine the two equations given below to get a nicer form?

For $\vec{v}, \vec{w} \in \mathbb{R}^d$ and $a \in \mathbb{R}$,

$0 \leq \langle \vec{v}, \vec{w} \rangle < -a$
$0 < - \langle \vec{v}, \vec{w} \rangle \leq a$

Which course fits my specialization in optimization and machine learning better?

Posted: 15 Apr 2022 07:09 AM PDT

Right now I am doing my master's in mathematics, specializing in optimization (discrete and non-linear) and machine learning. I have one last course to choose, and I am not quite sure which would benefit me more in my situation.

I have to decide between:

Numerical Linear Algebra

Measure theoretic probability theory

Like all mathematicians I had a compulsory course "Introduction to probability theory" before, and measure-theoretic probability theory is the next (optional) step in the direction of stochastics.

After my master's I am going into industry.

Farey Sequence and Mertens function

Posted: 15 Apr 2022 07:09 AM PDT

Mertens function $M(n)$ is defined as the cumulative sum of Möbius functions $\mu(k)$: $$M(n)=\sum_{k=1}^n\mu(k)$$ and is profoundly related to the Riemann hypothesis. A nice alternative formula (though not very useful computationally) uses the Farey sequence $F_n$ of completely reduced fractions to write $$M(n)=-1+\sum_{k\in F_n}\mathrm{e}^{2\pi\mathrm{i}k}.$$ While working on Fermi surfaces I came across the following sum in two dimensions: $$\sum_{\vec{k}\in F_n\times F_n}\!\!\!\!\mathrm{e}^{2\pi\mathrm{i}\vec{k}\cdot\vec{x}}=\biggl(\sum_{k_x\in F_n}\mathrm{e}^{2\pi\mathrm{i}k_xx}\biggr)\biggl(\sum_{k_y\in F_n}\mathrm{e}^{2\pi\mathrm{i}k_yy}\biggr).$$ I was wondering if someone knows a relation (if there is one...) between my goal sum and the Mertens function, i.e. generalizations of the Farey-sum definition of $M(n)$.
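Not part of the post, but the one-dimensional Farey identity above is easy to check numerically. A minimal Python sketch (the function names are mine): the imaginary parts of the Farey exponential sum cancel in conjugate pairs, so summing cosines suffices.

```python
from math import gcd, cos, pi

def mobius(k):
    """Möbius function via trial division."""
    result = 1
    p = 2
    while p * p <= k:
        if k % p == 0:
            k //= p
            if k % p == 0:
                return 0  # squared prime factor
            result = -result
        p += 1
    if k > 1:
        result = -result
    return result

def mertens(n):
    """M(n) as the cumulative sum of mu(k)."""
    return sum(mobius(k) for k in range(1, n + 1))

def mertens_farey(n):
    """M(n) = -1 + sum over the Farey sequence F_n of e^{2 pi i k}."""
    total = 0.0
    for q in range(1, n + 1):
        for a in range(q + 1):
            if gcd(a, q) == 1:  # enumerates each Farey fraction exactly once
                total += cos(2 * pi * a / q)
    return round(total - 1)
```

The two agree for small $n$, which at least confirms the 1D formula before trying to generalize it.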

Calibrating a pinhole camera (finding $z_0$)

Posted: 15 Apr 2022 07:08 AM PDT

A pinhole camera is a very simple theoretical device for generating perspective images on a plane that lies at a distance $z_0$ from the pinhole (a point) and whose normal vector is the direction in which the camera is pointing. Using standard related frames, we can relate the world coordinate system to a local camera coordinate system attached to the camera.

If the world coordinates of a point are given by $P$ and the local coordinates of its image are $Q$ then

$P = P_0 + t R Q $

where $P_0$ are the world coordinates of the pinhole point and $R$ is the rotation matrix specifying the orientation of the axes of the local coordinate frame of the camera. The vector $Q$ has the form $Q = [x, y, z_0]$, where $z_0$ is the camera-specific constant described above. To completely specify the camera, $z_0$ has to be known.

Suppose you have a camera with an unknown $z_0$, then the question is how to find it using a single image of a point with known coordinates, from a camera location $P_0$ that is known, and a known orientation $R$.

My Solution:

From the camera equation above, everything is known, except for the scalar $t$ and the third coordinate of $Q$ which is $z_0$.

From the equation we have

$ P - P_0 = t R Q $

The left side is a known vector. Applying $R^T $ to both sides

$R^T (P - P_0) = t Q = [ t x, t y , t z_0 ]^T$

Since the left hand side is known, then we can solve for $t$ from the first or second coordinate, and then from the third coordinate we can solve for $z_0$.
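As a sanity check of this procedure (my own illustration, with made-up numbers), one can synthesize an observation from a known $z_0$ and then recover it:

```python
import math

def matvec(M, v):
    """3x3 matrix times 3-vector."""
    return [sum(M[i][j] * v[j] for j in range(3)) for i in range(3)]

def transpose(M):
    return [list(row) for row in zip(*M)]

# Hypothetical camera: rotation about the z-axis by 0.7 rad (all numbers made up)
c, s = math.cos(0.7), math.sin(0.7)
R = [[c, -s, 0.0],
     [s,  c, 0.0],
     [0.0, 0.0, 1.0]]
P0 = [1.0, -2.0, 0.5]        # known pinhole location
x, y = 0.3, -0.4             # known image coordinates of the point
z0_true, t_true = 2.5, 1.7   # ground truth, hidden from the solver

# Synthesize the world point: P = P0 + t R Q with Q = [x, y, z0]
RQ = matvec(R, [x, y, z0_true])
P = [P0[i] + t_true * RQ[i] for i in range(3)]

# Recovery: R^T (P - P0) = [t x, t y, t z0]
v = matvec(transpose(R), [P[i] - P0[i] for i in range(3)])
t_est = v[0] / x             # assumes x != 0; otherwise use the y-coordinate
z0_est = v[2] / t_est
```

Both $t$ and $z_0$ come back exactly, as the algebra above predicts.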

Is there a mathematical expression for a variable that oscillates either side of a fixed value?

Posted: 15 Apr 2022 07:11 AM PDT

I have a testing algorithm that produces an output number; the process requires many individual tests. (And I'm trying to learn more maths.)

If the thingie under test is working correctly, the output oscillates randomly but closely on either side of 1. So it may be, say, 1.0002 or 0.999995, etc. $ X \approx 1 $ comes to mind, but that seems quite slack, as it implies $X$ might always fall to one side of 1. Might $ \forall X, ~ X = 1 \pm \delta ~\text{where} ~\delta \to 0 $ be correct? I know that in maths 'tending to zero' means really quite small, but in my case $ \delta \nless 10^{-8} $ only. Hmm, I'm not sure about $\pm$ either, as that might still imply one-sidedness.

Is there any stricter notation for oscillating variables, or do I need to use words?

Is there a version of the Poincaré recurrence theorem for attracting sets?

Posted: 15 Apr 2022 07:00 AM PDT

In this paper, on the left side of page 2, there is a statement of the Poincaré recurrence theorem, which I quote here:

Our algorithm relies on the Poincaré recurrence theorem, which states that a trajectory on an attracting set will sooner or later visit the same regions of the state space.

I found it difficult to work out which version of the Poincaré recurrence theorem matches this statement. The Poincaré recurrence theorem stated in many textbooks on ergodic theory and dynamical systems concerns volume-preserving systems, and the recurrence property applies to any bounded region of the phase space. I quote the statement from the book Mathematical Methods of Classical Mechanics by V. I. Arnold here:

Let $g$ be a volume-preserving continuous one to one mapping which maps a bounded region $D$ of euclidean space onto itself: $gD=D$. Then in any neighborhood $U$ of any point of $D$ there is a point $x\in U$ which returns to $U$, i.e. $g^nx\in U$ for some $n>0$.

Could someone help clarify what the first statement of Poincaré recurrence is about? Maybe there exists a version of the theorem specialized to attracting sets?

How to maximize $F(A')=\sum\limits_{b\in B}\max\limits_{a\in A'}f(a,b)$?

Posted: 15 Apr 2022 06:59 AM PDT

Given finite sets $A$ and $B$, a function $f : A \times B \to \mathbb{R}$ and a positive integer $N$, how to find a subset $A'\subseteq A$, where $|A'|\le N$, that maximizes the following function? $$F(A') := \sum\limits_{b\in B}\max\limits_{a\in A'}f(a,b)$$
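Not an answer, but for small instances the objective can at least be evaluated and maximized by exhaustive search; a sketch (all names are mine). The search is exponential in $|A|$, so anything large would need a smarter method:

```python
from itertools import combinations

def F(A_sub, B, f):
    """F(A') = sum over b in B of max over a in A' of f(a, b)."""
    return sum(max(f(a, b) for a in A_sub) for b in B)

def best_subset(A, B, f, N):
    """Exhaustive search over all nonempty subsets A' with |A'| <= N."""
    best_val, best_sub = None, None
    for r in range(1, N + 1):
        for A_sub in combinations(A, r):
            val = F(A_sub, B, f)
            if best_val is None or val > best_val:
                best_val, best_sub = val, A_sub
    return best_val, best_sub
```

Even as a baseline this is useful for checking any candidate heuristic on toy data.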

Finding Boundary Value Formulation of PDE from Variational Formulation

Posted: 15 Apr 2022 06:55 AM PDT

I have the following variational formulation:

Find $u ∈ H^1_0(-1,1) $ such that

$\int_{-1}^{1} u'v' = v(0) $ for any value of $v ∈ H^1_0(-1,1)$


What's causing me issues is how to treat the $v(0)$; typically we want the test function $v$ to disappear in the BV formulation. It is straightforward to write the left side of the equation in inner-product form,

$\int_{-1}^{1} u'v' = <∇u,∇v> $ or equivalently, using Green's identity, $<-Δu, v>$

Then ideally I turn the right hand side of the equation into inner product format with either $<,∇v>$ or $<,v>$ to remove the test function $v$ from the Boundary Value formulation.

I was trying to use $v(0) = v(x) - \int_{-1}^{1} v'(x) dx $

I can then find $\int_{-1}^{1} ∇u∇v + ∇v dx = v(x) $ and I believe the left hand side can be written as $<∇u,∇v> + <1,∇v>$ (the $1$ is strange, but I'm not sure how else to write it). Furthermore to get the right hand side $v(x)$ into inner product form, I wrote it as $\int_{-1}^{1} ∇v dx$

Following this logic I can write the BV formulation as $∇u + 1 = ∇u$, which yields $1 = 0$, so clearly I've made a mistake.

Anybody know where I went wrong?

A question about rank of some matrix

Posted: 15 Apr 2022 06:55 AM PDT

Let $P$ be an $s \times s$ matrix and $Q$ an $s \times r$ matrix. Assume that $\operatorname{rank}(P\,|\,Q)=s$. Can we find an $r \times s$ matrix $R$ such that $\operatorname{rank}(P+QR)=s$?

A question on power series

Posted: 15 Apr 2022 06:54 AM PDT

Given a power series $f(z)=\displaystyle{\sum_{n=0}^\infty} c_nz^n$, there is an associated power series $|f|(z):=\displaystyle{\sum_{n=0}^\infty} |c_n|z^n$. We know that $f$ and $|f|$ have the same circle of convergence. Generally, can we say more about them? For example, does $|f|$ have more points of convergence or more singular points on the circle of convergence?

A similar question: if $|f_1|=|f_2|$, are there some general connections between $f_1$ and $f_2$?

If (0,0), (a,0) and (a+b, c) are three vertices of a parallelogram, taken in order, then the fourth vertex is

Posted: 15 Apr 2022 06:50 AM PDT

Options

A: (a,b)  B: (b,c)  C: (a,c)  D: (a+b,c)  

I tried to apply the midpoint theorem and got an answer of (0,0), which isn't a valid option.

Continuity of an integral with parameter-dependent domain of integration

Posted: 15 Apr 2022 06:59 AM PDT

I have an integral of the form: $$g(s)=\int_{D(s)}f(x)dx,$$ where $x\in\mathbb{R}^n$, $s\in\mathbb{R}$, $D(s)$ is a parameter-dependent subset of $\mathbb{R}^n$, $f$ is a "nice" function (i.e. $f\in\mathcal{C}^{\infty}$), and the integral converges for all $s$.

I want to understand if $g(s)$ is continuous, in particular in the case where $D(s)$ is a compact and convex set with non-zero measure for all $s$. All results I was able to find on the topic (e.g. https://encyclopediaofmath.org/wiki/Parameter-dependent_integral) consider a parameter-dependent integrand, with a constant domain of integration. I tried to transform the problem with: $$\int_{D(s)}f(x)dx = \int_{\mathbb{R}^n}f(x)\chi_{D(s)}(x)dx=\int_{\mathbb{R}^n}h(x,s)dx.$$ However, $h(x,s)$ is not continuous in $s$, meaning that I cannot apply any of the results I found.

If it can help, I am familiar with the concept of hemicontinuity of set-valued functions, which intuitively might be related to this issue (maybe hemicontinuity of $D(s)$ implies continuity of $g(s)$?).

Does anybody know any related results? Thanks!
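Not an answer either, but the phenomenon is easy to probe numerically. A toy sketch with my own choices $f(x)=e^{-x^2}$ and $D(s)=[-s,s]$ (a compact convex family), using a midpoint Riemann sum:

```python
import math

def g(s, m=4000):
    """Midpoint-rule approximation of the integral of exp(-x^2) over D(s) = [-s, s]."""
    if s <= 0:
        return 0.0
    h = 2 * s / m
    return h * sum(math.exp(-(-s + (i + 0.5) * h) ** 2) for i in range(m))
```

On this example the increments $|g(s+h)-g(s)|$ shrink with $h$, consistent with the intuition that a (hemi)continuously varying domain should give a continuous $g$.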

Solving a Fourth-Order Linear Homogeneous Differential Equation

Posted: 15 Apr 2022 06:46 AM PDT

I would like to solve the fourth-order homogeneous ordinary differential equation given by

$\displaystyle \frac{d^{4}\theta}{d z^{4}} + \lambda \cdot \theta = 0$

with a constant coefficient $\lambda$. Using the characteristic equation, I come up with the following roots

$r_1=-\sqrt[4]{-\lambda}, r_2=\sqrt[4]{-\lambda}, r_3=-i\sqrt[4]{-\lambda}, r_4=i\sqrt[4]{-\lambda}$

Putting the roots into the general solution, I get

$\displaystyle \theta{\left(z \right)} = C_{1} e^{- z \sqrt[4]{- \lambda}} + C_{2} e^{z \sqrt[4]{- \lambda}} + C_{3} \sin\left(z \sqrt[4]{- \lambda}\right) + C_{4} \cos\left(z \sqrt[4]{- \lambda}\right)$

(Here is already the first problem, I am not sure if the solution is correct).

For the specific solution only two boundary conditions are given: $\theta(0)=C, \theta(z \rightarrow \infty)=0$

Assuming $C<0$, the authors come up with the following solution

$\theta(z) = C e^{-z \sqrt[4]{4\lambda}} \cos\left(z \sqrt[4]{4 \lambda}\right)$.

I don't understand how to obtain this solution based on the boundary conditions, nor where the 4 under the root comes from. I am grateful for any help. Thank you!
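As an aside (a standard computation of my own, not from the post): for $\lambda>0$ the fourth roots of $-\lambda$ are

```latex
% Fourth roots of -\lambda for \lambda > 0:  r^4 = -\lambda = \lambda e^{i\pi}
\[
  r_k = \lambda^{1/4}\, e^{\,i\pi(2k+1)/4}
      = \lambda^{1/4}\,\frac{\pm 1 \pm i}{\sqrt{2}},
  \qquad k = 0,1,2,3,
\]
% so every root has real and imaginary parts of magnitude
\[
  \frac{\lambda^{1/4}}{\sqrt{2}} \;=\; \sqrt[4]{\frac{\lambda}{4}},
\]
```

which is one place a $4$ under a fourth root can enter.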

If $-1 \le x \le 1$, what is the maximum value of $x+\sqrt{1-x^2}$? (Cannot use calculus method)

Posted: 15 Apr 2022 07:11 AM PDT

If $-1 < x < 1$, what is the maximum value of $x+\sqrt{1-x^2}$? (Cannot use calculus method)

As stated in the problem, I can't use calculus, so I'm using things I've learnt so far instead:

One of the things I have tried most successfully is using trig substitutions...

For example, if I substitute $x = \sin \phi$, this yields $\sin \phi + \cos \phi$

But what should I do next? Or are there any other methods to solve the problem?
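A numerical check, not a proof (and my own addition): evaluating the expression on a fine grid over $[-1,1]$:

```python
import math

# Evaluate f(x) = x + sqrt(1 - x^2) on a fine grid over [-1, 1]
best = max(
    x / 1000 + math.sqrt(1 - (x / 1000) ** 2)
    for x in range(-1000, 1001)
)
```

The grid maximum sits numerically at about $\sqrt2 \approx 1.41421$, which is the value one would hope to establish from the trig substitution.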

Group Ring/Algebra and Functions of Positive Type

Posted: 15 Apr 2022 07:08 AM PDT

Given a group $G$, we can form its complex group ring/algebra $\Bbb{C}[G]$, which can be described either as the set of all finite formal sums of group elements with complex coefficients or as the set of all functions $G \to \Bbb{C}$ with finite support. Question:

Given a function $f : G \to \Bbb{C}$ of finite support, what does it mean to say that $f$ is of positive type? And given that description, is it possible to work out what it means when we view $f$ as a finite formal sum of group elements with complex coefficients; i.e., what does it say about the group elements in the sum, and what does it say about the complex coefficients?

For reference, see the definition of $\Bbb{k}_r[\Gamma]_+$ on page 4 of this.

Is the Fourier Basis a complete basis of L^2 if we consider functions that differ on a set of Lebesgue-measure 0 to be distinct?

Posted: 15 Apr 2022 06:47 AM PDT

I am currently working on replicating some results from the following paper:

https://arxiv.org/abs/1803.00798

https://onlinelibrary.wiley.com/doi/abs/10.1002/jae.2846

In it, the authors define the square-integrable functions somewhat unconventionally, in that they consider two functions that differ on a set of Lebesgue measure zero to be distinct. So effectively we are not looking at the equivalence classes but at the functions themselves.

If I understand the paper correctly, the authors later make use of the fact that the Fourier basis is a complete orthogonal basis of the space of square-integrable functions when approximating a functional integral over $L^2$. My question is whether this way of defining $L^2$ creates a problem with the desired properties of the Fourier basis. Sadly, my theoretical knowledge in this area is somewhat lacking, and I would be glad of any advice.

[I originally asked this question on CrossValidated, but I was told that this is probably the more appropriate place.]

Combining product and chain rule in derivative

Posted: 15 Apr 2022 07:06 AM PDT

Can someone confirm whether the following derivative is correct?

I want to find the derivative with respect to S of:

$S*\exp((b-r)*T) * N(x_1) $ where $x_1 = \frac{\ln(S/X)}{\sigma \sqrt(T)} + (1+\mu)\sigma\sqrt(T)$

What I found:

$\frac{\partial}{\partial S} S*\exp((b-r)*T) * N(x_1) = \frac{\partial}{\partial S} S*\exp((b-r)*T) * \frac{1}{\sqrt(2 \pi)} * \exp(-\frac{x^2}{2})$

Plugging in $x_1 = \frac{\ln(S/X)}{\sigma \sqrt(T)} + (1+\mu)\sigma\sqrt(T)$ yields

$\frac{\partial}{\partial S} S*\exp((b-r)*T) * \frac{1}{\sqrt(2 \pi)} * \exp(-\frac{(\frac{\ln(S/X)}{\sigma \sqrt(T)} + (1+\mu)\sigma\sqrt(T))^2}{2})$

Now, using the product rule, where

$f(S) =S*\exp((b-r)*T) * \frac{1}{\sqrt(2 \pi)}$

and

$g(S) = \exp(-\frac{(\frac{\ln(S/X)}{\sigma \sqrt(T)} + (1+\mu)\sigma\sqrt(T))^2}{2}) $

we can find the derivative by $f'(S)g(S)+f(S)g'(S)$.

$f'(S) = \exp((b-r)*T) * \frac{1}{\sqrt(2 \pi)}$

$g'(S) = \exp(-\frac{(\frac{\ln(S/X)}{\sigma \sqrt(T)} + (1+\mu)\sigma\sqrt(T))^2}{2}) * \frac{1}{S\sigma \sqrt(T)} $ (where I used the chain rule)

Thus, $\frac{\partial}{\partial S} = f'(S)g(S)+f(S)g'(S) = \exp((b-r)*T) * \frac{1}{\sqrt(2 \pi)} * \exp(-\frac{(\frac{\ln(S/X)}{\sigma \sqrt(T)} + (1+\mu)\sigma\sqrt(T))^2}{2}) \\ + S*\exp((b-r)*T) * \frac{1}{\sqrt(2 \pi)} * \exp(-\frac{(\frac{\ln(S/X)}{\sigma \sqrt(T)} + (1+\mu)\sigma\sqrt(T))^2}{2}) * \frac{1}{S\sigma \sqrt(T)} $
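One way to confirm or refute a hand-derived derivative is a central finite difference. A sketch with made-up parameter values (all names and numbers are mine; I take $N$ here as the asker's density form $\frac{1}{\sqrt{2\pi}}e^{-x^2/2}$):

```python
import math

def n_pdf(x):
    """The asker's 1/sqrt(2 pi) * exp(-x^2 / 2)."""
    return math.exp(-x * x / 2) / math.sqrt(2 * math.pi)

def h_of_S(S, X=1.2, b=0.05, r=0.02, T=1.0, sigma=0.3, mu=0.1):
    """S * exp((b - r) T) * N(x_1) with x_1 as defined in the post."""
    x1 = math.log(S / X) / (sigma * math.sqrt(T)) + (1 + mu) * sigma * math.sqrt(T)
    return S * math.exp((b - r) * T) * n_pdf(x1)

def central_diff(f, S, h=1e-6):
    """Numerical df/dS; compare against any closed-form candidate."""
    return (f(S + h) - f(S - h)) / (2 * h)
```

Comparing the closed-form candidate against `central_diff(h_of_S, S)` at a few values of $S$ will quickly show whether a chain-rule factor has gone missing.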

Baby Rudin Theorem 7.12

Posted: 15 Apr 2022 06:58 AM PDT

Rudin's statement is as follows.

Theorem 7.12 If $\{f_n\}$ is a sequence of continuous functions on $E$, and if $f_n\to f$ uniformly on $E$, then $f$ is continuous on $E$.

This very important result is an immediate corollary of Theorem 7.11.

Theorem 7.11 Suppose $f_n\to f$ uniformly on a set $E$ in a metric space. Let $x$ be a limit point of $E$, and suppose that $$\lim_{t\to x}f_n(t)=A_n (n=1,2,3,...).$$ Then $\{A_n\}$ converges, and $$\lim_{t\to x}f(t)=\lim_{n\to \infty}A_n.$$ In other words, the conclusion is that $$\lim_{t\to x}\lim_{n\to \infty}f_n(t)=\lim_{n\to \infty}\lim_{t\to x}f_n(t).$$

Here is my thought:

If $p$ is a limit point of $E$ and $p\in E$, then I can prove $f$ is continuous at $p$ by Theorem 4.6 & Theorem 7.11

Theorem 4.6 In the situation given in Definition 4.5 (i.e. Suppose $X$ and $Y$ are metric spaces, $E \subset X$, $p\in E$, and $f$ maps $E$ into $Y$.), assume also that $p$ is a limit point of $E$. Then $f$ is continuous at $p$ if and only if $\displaystyle\lim_{x\to p}f(x)=f(p)$

I know $f_n$ is continuous on $E$, then $f_n$ is continuous at $p$, according to Theorem 4.6, $\displaystyle\lim_{x\to p}f_n(x)=f_n(p)(n=1,2,3,...).$

I know $f_n\to f$ uniformly on $E$; then according to Theorem 7.11 and the above, $\{f_n(p)\}$ converges, and $$\lim_{x\to p}f(x)=\lim_{n\to \infty}f_n(p).$$ Since uniform convergence implies pointwise convergence, $f(p)=\displaystyle\lim_{n\to \infty}f_n(p).$ Then $$\lim_{x\to p}f(x)=f(p)$$ According to Theorem 4.6, $f$ is continuous at $p$.

Here is my question:

Now I think I should prove that, under the assumptions of Theorem 7.12, $f$ is continuous at the isolated points of $E$; then Theorem 7.12 follows. But how do I prove this?

By the way, I think many people have other, simpler ways to prove this theorem; can somebody give me some ideas? Thank you.

Can we say that any 2D shape or plane figure is also a curve in geometry?

Posted: 15 Apr 2022 06:50 AM PDT

The definition of a curve is: "A curve is a shape or a line which is smoothly and continuously drawn in a plane, having bends or turns in it." So according to this definition, can we say that any 2D shape or plane figure is also a curve in geometry?

Find the minimum $n$ such that $x^3-\frac{2\sqrt2+1}{2}x^2+\frac{2\sqrt2+1}{2}x-1\mid x^n-1$

Posted: 15 Apr 2022 06:54 AM PDT

Set $f(x)= x^3-\frac{2\sqrt2+1}{2}x^2+\frac{2\sqrt2+1}{2}x-1$. I found the roots of this polynomial: $x=1$ and $\frac{1}{2}\left(\frac{2\sqrt2 -1}{2} \pm i \frac{\sqrt7+1}{2}\right)$. Here $n$ is a natural number.

But I can't get past this step. What should I do to find the minimal value of $n$ without brute calculation (long division)? Methods that avoid finding the roots are also welcome.

Proving the AM-GM Inequality by induction

Posted: 15 Apr 2022 06:46 AM PDT

I am required to prove the AM-GM inequality using induction but via this route:

(i) Let $a_1, a_2, \ldots, a_n$ be a sequence of positive numbers. Denote their sum by $s$ and their geometric mean by $G$. Let $a$ and $b$ be two terms in the sequence such that $a > G > b$. Show that replacing $a$ by $G$ and $b$ by $ab/G$ does not alter the geometric mean, and that the sum does not increase.

(ii) Use (i) and mathematical induction to prove $s \geq nG$.

Part (i) is simple, and I am able to prove part (ii) using repeated substitution (similar to a method described on Wikipedia), but I am required to prove (ii) using induction instead. How can this be done using part (i) and induction? Thank you!
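A numerical illustration of the part (i) step (my own sketch, not the requested induction): the replacement preserves the product, hence the geometric mean, while $G + ab/G - a - b = (G-a)(G-b)/G \le 0$ when $a > G > b$, so the sum does not increase.

```python
import math

def replace_step(vals):
    """Replace the largest term a by G and the smallest term b by a*b/G."""
    G = math.prod(vals) ** (1 / len(vals))  # geometric mean
    a, b = max(vals), min(vals)
    new = list(vals)
    new[new.index(a)] = G
    new[new.index(b)] = a * b / G
    return new

old = [4.0, 1.0, 2.0]   # G = 2, with a = 4 > G > b = 1
new = replace_step(old)
# product (hence geometric mean) is preserved; the sum drops from 7 to 6
```

Iterating this step drives every term toward $G$, which is the engine behind the induction the exercise asks for.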

Details in the computation of $H_k ( \mathbb{R}^n, \mathbb{R}^n - \{0\} )$ for $k=1,0$

Posted: 15 Apr 2022 07:06 AM PDT

During the proof of Theorem 2.26 (Brouwer's theorem on invariance of dimension) in his book, Hatcher claims that "$H_k ( \mathbb{R}^n , \mathbb{R}^n - \{ 0 \} )$ is $\mathbb{Z}$ for $k=n$ and $0$ otherwise". I'm perfectly fine with the argument that $ \mathbb{R}^n - \{ 0 \} $ has the same homology as $ S^{n-1} $, and I am also aware of the homology groups of spheres.

I then started to develop the relevant computations on my own, but stumbled upon some details that left me puzzled. They all came up in the following segment of the long exact sequence of relative homologies:

$$ H_1 ( \mathbb{R}^n - \{0\} ) \xrightarrow{ (i_1)_{\ast} } H_1 ( \mathbb{R}^n ) \xrightarrow{ (j_1)_{\ast} } H_1 ( \mathbb{R}^n, \mathbb{R}^n - \{0\} ) \xrightarrow{ \partial_1 } H_0 ( \mathbb{R}^n - \{0\} ) \xrightarrow{ (i_0)_{\ast} } H_0 ( \mathbb{R}^n ) \xrightarrow{ (j_0)_{\ast} } H_0 ( \mathbb{R}^n, \mathbb{R}^n - \{0\} ) \xrightarrow{ \partial_0 } 0 $$ which reads, for every $n$, using that $\mathbb{R}^n$ is path-connected and contractible: $$ H_1 ( \mathbb{R}^n - \{0\} ) \xrightarrow{ (i_1)_{\ast} } 0 \xrightarrow{ (j_1)_{\ast} } H_1 ( \mathbb{R}^n, \mathbb{R}^n - \{0\} ) \xrightarrow{ \partial_1 } H_0 ( \mathbb{R}^n - \{0\} ) \xrightarrow{ (i_0)_{\ast} } \mathbb{Z} \xrightarrow{ (j_0)_{\ast} } H_0 ( \mathbb{R}^n, \mathbb{R}^n - \{0\} ) \xrightarrow{ \partial_0 } 0 $$ Now, the way things are written in Hatcher suggests his claim follows easily and solely from the properties of exact sequences, for every possible case and every $k \neq n$. But this last segment is always a little thorny for me.

1) First, when $n \geq 2$, $S^{n-1}$ is path-connected, and we can further simplify: $$ H_1 ( \mathbb{R}^n - \{0\} ) \xrightarrow{ (i_1)_{\ast} } 0 \xrightarrow{ (j_1)_{\ast} } H_1 ( \mathbb{R}^n, \mathbb{R}^n - \{0\} ) \xrightarrow{ \partial_1 } \mathbb{Z} \xrightarrow{ (i_0)_{\ast} } \mathbb{Z} \xrightarrow{ (j_0)_{\ast} } H_0 ( \mathbb{R}^n, \mathbb{R}^n - \{0\} ) \xrightarrow{ \partial_0 } 0 $$ but this didn't take me very far using just algebra (but I'm very new to this). Instead, I tried to explicitly understand the morphism $(i_0)_{\ast}$, induced by the inclusion $i_0 : C_0 ( \mathbb{R}^n - \{0\} ) \to C_0 ( \mathbb{R}^n )$:

Let $\sigma$ be a 0-cycle (i.e. a formal sum of points) in $\mathbb{R}^n - \{0\}$ such that $\sigma$ is the boundary of a 1-chain $\sum_{i} n_i \alpha_i$ in $\mathbb{R}^n$ (in other words, $\sigma$ represents a class in $\ker (i_0)_{\ast}$). Then, as long as the dimension of the ambient space is $n \geq 2$, any $\alpha_i$ that meets $0$ can be replaced by a 1-chain $\tilde{\alpha}_i$ such that $\partial_1 \tilde{\alpha}_i = \partial_1 \alpha_i$, but the image of $\tilde{\alpha}_i$ avoids $0$. Thus, $\sigma$ is also a boundary in $\mathbb{R}^n - \{0\}$, and therefore trivial in $H_0 ( \mathbb{R}^n - \{0\} )$.

The above reasoning, if correct, tells me that $(i_0)_{\ast}$ is injective. In this case, yes, I can conclude using exactness + the isomorphism theorem that $H_1 ( \mathbb{R}^n, \mathbb{R}^n - \{0\} ) \approx 0$, but I wonder: is there any way to conclude this in general, solely from the structure of the diagram?

2) When $n=1$, my rudimentary argument above obviously breaks down. Also, $S^0$ is no longer path-connected, so we have: $$ 0 \xrightarrow{ (j_1)_{\ast} } H_1 ( \mathbb{R}, \mathbb{R} - \{0\} ) \xrightarrow{ \partial_1 } \mathbb{Z} \oplus \mathbb{Z} \xrightarrow{ (i_0)_{\ast} } \mathbb{Z} \xrightarrow{ (j_0)_{\ast} } H_0 ( \mathbb{R}, \mathbb{R} - \{0\} ) \xrightarrow{ \partial_0 } 0 $$ We now wish to prove that $H_1 ( \mathbb{R}, \mathbb{R} - \{0\} ) \approx \mathbb{Z}$, so it definitely can't be the case that $(i_0)_{\ast}$ is injective again (or this homology group would be trivial). This left me wondering: was it indeed injective at 1), or is that reasoning flawed, and both conclusions follow from some purely algebraic argument? Otherwise, how do we conclude this case?

3) I know from exactness that $(j_0)_{\ast}$ must be surjective, but I have no intuition whatsoever on why should this imply $H_0 ( \mathbb{R}^n, \mathbb{R}^n - \{0\} ) \approx 0$. Any hint in this direction would be appreciated, but I feel that this part of the reasoning is related to understanding 1) and 2), so that I can also use information from $(i_0)_{\ast}$.

To give some context: I started studying homology some days ago with the intention to write some paragraphs in my thesis about orientability in a topological setting, so I was basically following the shortest possible path towards it in Hatcher's book, although I have (not very wisely) skipped reduced homology.

Thanks in advance

Various approximations to a peaked distribution

Posted: 15 Apr 2022 06:49 AM PDT

I am given a probability function as follows (see also this MO question for context and various areas which this function appears):

$$p_n = \frac{e^{-a / n - b n}}{\sum_{n = 3} e^{-a / n - b n}}, \quad (1)$$

where $a$ and $b$ are some constants and $n = 3, 4, 5, \ldots$ (in practice, the upper bound of $n$ is finite). It is also assumed that the expectation value of $n$ is $6$, i.e., $\langle n \rangle = \sum_{n = 3} n \, p_n = 6$.
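A minimal sketch of how one might evaluate $(1)$ numerically (the truncation point and helper names are my own choices; the constraint $\langle n\rangle = 6$ still has to be imposed by tuning $a$ and $b$):

```python
import math

def pmf(a, b, n_max=500):
    """Normalized p_n from Eq. (1), truncated at n_max."""
    weights = {n: math.exp(-a / n - b * n) for n in range(3, n_max + 1)}
    Z = sum(weights.values())
    return {n: w / Z for n, w in weights.items()}

def mean_var_p6(p):
    """Return the mean, the variance, and p_6 of a pmf over n."""
    mean = sum(n * q for n, q in p.items())
    var = sum(q * (n - mean) ** 2 for n, q in p.items())
    return mean, var, p.get(6, 0.0)
```

Scanning $(a,b)$ pairs with `mean_var_p6` pinned at mean $6$ is one way to regenerate the red points in the plots described below.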

Now, I would like to obtain a relation between the variance of $n$, i.e., $\sigma^2 = \langle (n - \langle n \rangle)^2 \rangle = \sum_{n = 3} p_n (n - 6)^2$, and $p_6$. I am also given the additional information that $p_n$ is peaked around $n = 6$. Thus, one can approximate $p_n$ around $n = 6$ as a Gaussian, that is: $P_n = 1 / \sqrt{2 \pi \sigma^2} \exp [ -(n - 6)^2 / (2 \sigma^2)]$, and obtain:

$$\sigma^2 P_6^2 = \frac{1}{2 \pi}. \quad (2)$$

However, as $p_6$ approaches $1$, the width of the distribution goes to $0$, thus one has:

$$\sigma^2 + p_6 \to 1 \quad \text{as} \quad p_6 \to 1. \quad (3)$$

As a result, without using the specific form of $p_n$ in $(1)$, we have obtained two relations, $(2)$ and $(3)$, which relate $p_6$ and $\sigma^2$, based merely on the fact that $p_n$ is peaked around $n = 6$. Now, in order to obtain the regions of validity of the above two approximations, one can study $p_n$ in $(1)$ directly, subject to $\langle n \rangle = 6$, by numerical analysis and obtain:

[Figure: two plots of $\sigma^2$ vs. $p_6$]

In the above two plots, the red points are obtained directly from $(1)$, while $\langle n \rangle = 6$ is imposed as a constraint. The left panel shows that approximation $(2)$, depicted as an orange line, is very good for $0.20 \leq p_6 < 0.66$, and the brown line, which depicts approximation $(3)$, is valid for $0.66 < p_6 < 1$. The right panel illustrates that neither $(2)$ nor $(3)$ is valid for $p_6 < 0.20$.

I have two questions:

1. Is there any neat relation between $\sigma^2$ and $p_6$ for values of $p_6$ less than $0.20$ (the right panel in the above figure), since in this region the normal approximation to $p_n$ around $n = 6$ doesn't seem to work anymore? (It seems the tail of $p_n$ now plays a significant role, although $p_n$ is still peaked at $n = 6$.)

2. How can one analytically obtain the regions of validity of the above two approximations, $(2)$ and $(3)$, directly from $p_n$?

Prove that $F(x)=\int_{x_{1}^{0}}^{x_1}\cdots \int_{x_{n}^{0}}^{x_n} f(y) \, dy_1 \cdots dy_n$ is continuous

Posted: 15 Apr 2022 07:04 AM PDT

Let $\Omega$ be a open subset of $\mathbb{R}^{n}$, $f \in L_{\operatorname{loc}}^{1}(\Omega)$ and $x^0=(x_{1}^{0},\dots,x_{n}^{0})$ an arbitrary point of $\Omega$. Define $$F(x)=\int_{x_{1}^{0}}^{x_1}\cdots \int_{x_{n}^{0}}^{x_n} f(y) \, dy_1 \cdots dy_n.$$

My question: How to prove that $F$ is a continuous function in a sufficiently small neighborhood of $x^0$?

I started trying to prove this statement in the case $\Omega=\mathbb{R}$. So, let $x_0 \in \mathbb{R}$ and $(x_n) \subset \mathbb{R}$ such that $x_n \rightarrow x_0$ in $\mathbb{R}$. Let $M>0$ such that $-M<x_n<M$ and $-M<x_1^0<M$ for all $n \in \mathbb{N}$. Then, for $x^0=x_{1}^{0}$ $$F(x_n)=\int_{x_1^0}^{x_n}f(y)dy=\int_{\mathbb{R}} 1_{(x_1^0,x_n)}(y)f(y)\,dy. \tag{*}$$ Then, define $g_n(y)=1_{(x_1^0,x_n)}(y)f(y)$. We have that $|g_n(y)|\leq 1_{(-M,M)}|f(y)|=g(y)$ and $g \in L^1(\mathbb{R})$. If we prove that $$1_{(x_1^0,x_n)} \rightarrow 1_{(x_1^0,x_0)} \hbox{ a.e. in } \mathbb{R},$$ the result follows from the Dominated Convergence Theorem. (This is also a point that I have not been able to prove.)

It's just awkward to write (*) like this, because it can happen that $x_{n_0-1}<x_1^0$ and $x_{n_0}>x_1^0$ for some $n_0 \in \mathbb{N}$.

PS: This question comes from Corollary 1, page 263, in Trèves' book Topological Vector Spaces, Distributions and Kernels.

Inverse of spectral theorem for symmetric matrices?

Posted: 15 Apr 2022 06:52 AM PDT

I have another interesting problem. I intuitively think this is false.

If a matrix $A \in M_n(\mathbb{R})$ has $n$ distinct eigenvalues and $n$ orthogonal eigenvectors $q_1, q_2, q_3,\dots, q_n$, then $A$ is a symmetric matrix!


This is a kind of converse of the spectral theorem. Is it enough to use the spectral theorem, or does the converse not follow? I can't come up with any matrix that has these properties but is not symmetric.
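A numerical experiment, not a proof (and my own aside): assembling $A = Q\Lambda Q^{\mathsf T}$ from an orthogonal $Q$ and distinct real eigenvalues produces a matrix with exactly those eigenpairs, and one can then check it for symmetry:

```python
import math

def matmul(A, B):
    n, m, p = len(A), len(B), len(B[0])
    return [[sum(A[i][k] * B[k][j] for k in range(m)) for j in range(p)] for i in range(n)]

def transpose(A):
    return [list(row) for row in zip(*A)]

t = 0.6
Q = [[math.cos(t), -math.sin(t)],
     [math.sin(t),  math.cos(t)]]          # orthogonal columns = eigenvectors
Lam = [[2.0, 0.0],
       [0.0, -1.0]]                        # distinct real eigenvalues
A = matmul(matmul(Q, Lam), transpose(Q))   # A q_i = lambda_i q_i by construction
asym = abs(A[0][1] - A[1][0])              # deviation from symmetry
```

Here `asym` comes out at machine zero; varying `t` and the eigenvalues gives the same outcome, which is at least suggestive.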

Does $SO(n)$ lie in any $(n^2-1)$-dimensional subspace of $\mathbf R^{n^2}$?

Posted: 15 Apr 2022 06:48 AM PDT

The matrix group $SO(n)$ can be treated as a submanifold of $\mathbf R^{n^2}$. Does it lie in any $(n^2-1)$-dimensional subspace of $\mathbf R^{n^2}$?

For $n=2$ the answer is yes because $SO(2)$ lies in the span of the identity matrix and $\begin{pmatrix} 0 & 1\\ -1 & 0 \end{pmatrix}$. How about for $n>2$? Thanks.

Is the matrix $A = I - \frac{J}{n+1}$ idempotent?

Posted: 15 Apr 2022 06:48 AM PDT

I am supposed to figure out if the following statement is false or true.

If $I$ is the $n \times n$ identity matrix, and $J$ is an $n \times n$ matrix consisting entirely of ones, then the matrix $$A = I - \frac{J}{n+1}$$ is idempotent (i.e., $A^{2} = A$).


I understand obviously what $I$ and $J$ are; my issue is with $A = I - \frac{J}{n+1}$. I searched my textbook and found no reference to it. What does it mean?
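For context, $J$ here is just the matrix with every entry equal to $1$, so $A$ subtracts $\frac{1}{n+1}$ from every entry of the identity. The statement can then be tested directly for a small $n$ (sketch of mine):

```python
def identity(n):
    return [[1.0 if i == j else 0.0 for j in range(n)] for i in range(n)]

def ones(n):
    return [[1.0] * n for _ in range(n)]

def matmul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)] for i in range(n)]

def make_A(n):
    """A = I - J / (n + 1)."""
    I, J = identity(n), ones(n)
    return [[I[i][j] - J[i][j] / (n + 1) for j in range(n)] for i in range(n)]

n = 3
A = make_A(n)
A2 = matmul(A, A)
max_dev = max(abs(A2[i][j] - A[i][j]) for i in range(n) for j in range(n))
# max_dev measures how far A^2 is from A
```

Since $J^2 = nJ$, a short computation gives $A^2 - A = -J/(n+1)^2$, so `max_dev` equals $1/(n+1)^2$ for this $n$.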

Determine the concentration and normalization of a fuzzy set

Posted: 15 Apr 2022 07:03 AM PDT

Could you help determine the concentration and normalization of the fuzzy set $A$ given below?

A = {car | 0.5, truck | 0.9, bus | 0.7, scooter | 0, bike | 0.2}

Regards
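Assuming the standard Zadeh-style operators (concentration squares each membership grade; normalization divides by the height, i.e. the largest grade), a sketch, with the garbled "scooter" grade read as 0:

```python
A = {"car": 0.5, "truck": 0.9, "bus": 0.7, "scooter": 0.0, "bike": 0.2}

def concentration(mu):
    """CON(A): square every membership grade."""
    return {x: m ** 2 for x, m in mu.items()}

def normalization(mu):
    """NORM(A): divide every grade by the height max_x mu(x)."""
    h = max(mu.values())
    return {x: m / h for x, m in mu.items()}
```

With these definitions the truck grade concentrates to $0.81$ and normalizes to $1$, since $0.9$ is the height of the set.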

Find all primes $p,q$ such that $p^3+p=q^7+q$

Posted: 15 Apr 2022 07:04 AM PDT

The following has been unanswered on Art of Problem Solving and other forums for months.

Find all primes $p,q$ such that

$p^3+p=q^7+q$

One solution is $(5,2)$, and it has been computer-checked that there is no other solution up to $10^7$.
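The small-range verification is easy to reproduce. A brute-force sketch (my own code): for each prime $q$, solve $p^3 + p = q^7 + q$ for an integer $p$ by bisection and test primality:

```python
def is_prime(n):
    if n < 2:
        return False
    d = 2
    while d * d <= n:
        if n % d == 0:
            return False
        d += 1
    return True

def search(q_limit):
    """All prime pairs (p, q) with p^3 + p = q^7 + q and q <= q_limit."""
    solutions = []
    for q in range(2, q_limit + 1):
        if not is_prime(q):
            continue
        target = q ** 7 + q
        lo, hi = 1, target
        while lo < hi:  # bisect for the smallest p with p^3 + p >= target
            mid = (lo + hi) // 2
            if mid ** 3 + mid < target:
                lo = mid + 1
            else:
                hi = mid
        if lo ** 3 + lo == target and is_prime(lo):
            solutions.append((lo, q))
    return solutions
```

For small limits this returns only $(5,2)$, matching the reported computer check ($5^3+5 = 2^7+2 = 130$).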

I am really curious to see a complete solution.
