Monday, February 21, 2022

Recent Questions - Mathematics Stack Exchange



Solving a system of coupled nonlinear differential equations using the finite difference method?

Posted: 21 Feb 2022 12:22 PM PST

I am trying to model a tubular reactor using the finite difference method based on the following governing equations:

$$ -V_z \dfrac {\partial C_A}{\partial z} + D \dfrac {\partial ^2 C_A}{\partial z^2} - k_1C_AC_B = 0$$

$$ -V_z \dfrac {\partial C_B}{\partial z} + D \dfrac {\partial ^2 C_B}{\partial z^2} - k_1C_AC_B- k_2C_BC_C = 0$$

$$ -V_z \dfrac {\partial C_C}{\partial z} + D \dfrac {\partial ^2 C_C}{\partial z^2} + k_1C_AC_B -k_2C_BC_C = 0$$

$$ -V_z \dfrac {\partial C_D}{\partial z} + D \dfrac {\partial ^2 C_D}{\partial z^2} + k_2C_BC_C = 0$$

With simulation parameters of:

$$L=10; D=10^{-4}; k_1=k_2=1;$$ $$C_{A0}=C_{B0}=1; C_{C0}=C_{D0}=0 $$

However, after discretisation of the system, I am unsure of how to solve it due to the nonlinear reaction terms in each equation (i.e. $k_1C_AC_B$ etc.).

Are there suggestions to help solve this?
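One practical way around the nonlinear terms (a sketch, not the only way) is pseudo-transient continuation: march the discretised equations in artificial time until they stop changing, so the reaction terms are only ever evaluated at the previous step and no nonlinear solve is needed. The snippet below uses the stated $L$, $D$, $k_1$, $k_2$ and inlet values; $V_z=1$ and the boundary conditions (Dirichlet inlet, zero-gradient outlet) are assumptions made for illustration.

```python
import numpy as np

L, Dax, Vz, k1, k2 = 10.0, 1e-4, 1.0, 1.0, 1.0   # Vz = 1 is an assumption
N = 201
h = L / (N - 1)
dt = 0.4 * h / Vz                      # CFL-limited pseudo-time step

inlet = {"A": 1.0, "B": 1.0, "C": 0.0, "D": 0.0}
C = {s: np.full(N, inlet[s]) for s in "ABCD"}

def residual(C):
    # interior residual of  -Vz dC/dz + Dax d2C/dz2 + reaction terms,
    # first-order upwind for advection, central for diffusion
    r1 = k1 * C["A"] * C["B"]          # A + B -> C
    r2 = k2 * C["B"] * C["C"]          # B + C -> D
    react = {"A": -r1, "B": -r1 - r2, "C": r1 - r2, "D": r2}
    out = {}
    for s in "ABCD":
        u = C[s]
        adv = -Vz * (u[1:-1] - u[:-2]) / h
        dif = Dax * (u[2:] - 2.0 * u[1:-1] + u[:-2]) / h**2
        out[s] = adv + dif + react[s][1:-1]
    return out

maxR = np.inf
for _ in range(50_000):                # march until steady state
    R = residual(C)
    maxR = max(np.max(np.abs(R[s])) for s in "ABCD")
    for s in "ABCD":
        C[s][1:-1] += dt * R[s]
        C[s][0] = inlet[s]             # Dirichlet inlet (assumed)
        C[s][-1] = C[s][-2]            # zero-gradient outlet (assumed)
    if maxR < 1e-10:
        break
```

The alternative is Newton's method on the full discretised residual, which converges much faster near the solution but requires assembling the Jacobian of the reaction terms; the time-marching sketch trades speed for robustness.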

How to write an open interval in $\mathbb{R}^{2}$ or higher

Posted: 21 Feb 2022 12:28 PM PST

I want to check my understanding of writing intervals in terms of set notation in higher dimensions. What I mean is, for example, $(a,b) \subset \mathbb{R} = \{x \in \mathbb{R}: a<x<b \}$

So would it be the following as subsets of higher dimensions of $\mathbb{R}$?

$(a,b) \subset \mathbb{R}^{2} = \{(x,y) \in \mathbb{R}^{2}: a<x<b, y=0\}$

$(a,b) \subset \mathbb{R}^{3} = \{(x,y,z) \in \mathbb{R}^{3}: a<x<b, y =0, z=0\}$

I wasn't sure if the final example should be

$\{(x,y,z) \in \mathbb{R}^{3}: a<x<b, a<y<b, z=0\}$. (just by pattern matching the $\mathbb{R}^{2}$ case).
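For comparison, the two candidates in the $\mathbb{R}^{2}$ case describe genuinely different objects, and Cartesian-product notation keeps them apart:

```latex
% the interval embedded on the x-axis (a 1-dimensional subset of R^2):
(a,b)\times\{0\} \;=\; \{(x,y)\in\mathbb{R}^{2} : a<x<b,\ y=0\}
% the pattern-matched version, i.e. the open square (a 2-dimensional open set):
(a,b)\times(a,b) \;=\; \{(x,y)\in\mathbb{R}^{2} : a<x<b,\ a<y<b\}
```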

A singular variant of the OEIS sequence A349576

Posted: 21 Feb 2022 12:07 PM PST

I refer to a previous post of mine in which the recurrence relation (now OEIS sequences A349576 and A349982) $$ x_{n+1} = \frac{x_{n} + x_{n-1}}{(x_{n},x_{n-1})} + c\;\;\;\;\;\;\;(1)$$ is defined, where $(x_{n},x_{n-1})$ is the gcd of $x_{n}$ and $x_{n-1}$.

This recurrence relation often generates periodic sequences with periods depending on the value of the integer constant $c$.

I wondered what dynamics I would obtain by eliminating the dependence on this constant. One of the most interesting behaviors is shown by the following relation $$ x_{n+1} = \frac{x_{n} + x_{n-1}}{(x_{n},x_{n-1})} + (x_{n},x_{n-1})\;\;\;\;\;\;\;(2)$$ One might expect that the character (periodic or non-periodic) of the sequences generated by relation (2) depends on the choice of the initial conditions $x_0$ and $x_1$. But things are a little different.

I made some numerical experiments. The following screenshot shows the results obtained.

The image consists of a $21 \times 21$ matrix $T$ in which:

  • $x_0$ is the row index;
  • $x_1$ is the column index;
  • $t_{\,x_0,\,x_1}$ represents the period of the sequence generated by the recurrence relation $(2)$ starting with the initial conditions $x_0,\,x_1$ (the symbol "+" stands for a probably non-periodic sequence).


Up to $21 \times 21$ only $3$ possible periods appear:

  • $2901$, with a peak value of $2269429312765395470820$ $(\sim 2.27\cdot 10^{21})$


  • $155$, with a peak value of $513582$


  • $3$, with a peak value of $60$ (e.g. $13,18,32,27,60,32,27,60,32,...$).

Will different values appear for the period by extending the search scope?

And why do the three indicated values ($2901, 155, 3$) appear and not others?
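The search-scope question can at least be probed mechanically: since the pair $(x_{n-1}, x_n)$ determines the entire future of relation (2), a sequence is eventually periodic exactly when a state pair repeats. A sketch of such a period detector (the `max_steps` cutoff is an arbitrary choice standing in for the "+" entries):

```python
from math import gcd

def period(x0, x1, max_steps=100_000):
    """Return (period length, peak value on the cycle) for relation (2),
    or None if no state pair repeats within max_steps (a '+' candidate)."""
    seen = {}
    a, b = x0, x1
    for step in range(max_steps):
        if (a, b) in seen:
            length = step - seen[(a, b)]
            peak, u, v = max(a, b), a, b
            for _ in range(length):     # walk the cycle once for its peak
                g = gcd(u, v)
                u, v = v, (u + v) // g + g
                peak = max(peak, v)
            return length, peak
        seen[(a, b)] = step
        g = gcd(a, b)
        a, b = b, (a + b) // g + g
    return None
```

On the documented example it recovers `period(13, 18) == (3, 60)`, and Python's arbitrary-precision integers handle the $\sim 2.27\cdot 10^{21}$ peak of the $2901$-cycle without any special care.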

Smooth function represented by an infinite series

Posted: 21 Feb 2022 12:06 PM PST

Consider the sum $\sum_{n=0}^{\infty}e^{-n}\sin(n^2x).$ I want to show that this sum represents a smooth real-valued function and identify its divergent points. As given, it doesn't have a nice representation; however, once one expands $\sin(n^2x)$ into its series and uses Fubini's theorem to interchange the sums, we obtain a reasonably nice-looking power series. Showing that the coefficients actually converge, we can associate a function $f$ such that, if $a_n/n!$ is the coefficient of $x^n$ in the representation, then $a_n=f^{(n)}(0)$ and $f \in C^{\infty}(\mathbb R)$, via Borel's theorem (called Borel's lemma, for instance, on Wikipedia).

So my confusion is: can we say that such an $f$ is represented by the sum I've mentioned? Or what does it mean for a function to be represented by a sum? $f$ is not unique, and we used this sum to find a particular function whose Taylor expansion at $x=0$ is given by the series, which does not necessarily have to equal the value of the function. But the sum I gave does seem to converge. So from this very limited knowledge, how can we deduce that such an $f$ is necessarily not analytic when $x\neq 0$, or even talk about the radius of convergence of its series representation? There are a lot of choices of $f$ we could make in this manner, and the only knowledge we have is very limited.
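For the smoothness claim on its own, a route that avoids rearranging into a power series is to bound every termwise-differentiated series uniformly (a sketch of the estimate):

```latex
% The k-th termwise derivative is bounded uniformly in x:
\left|\frac{d^k}{dx^k}\,e^{-n}\sin(n^2x)\right|
  = e^{-n}\, n^{2k} \left|\sin^{(k)}(n^2x)\right| \le n^{2k} e^{-n},
% and \sum_n n^{2k} e^{-n} < \infty for every fixed k. By the Weierstrass
% M-test each differentiated series converges uniformly on \mathbb{R}, so
% the original sum defines a C^\infty function with no divergent points.
```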

Why do we need so many axioms about sets?

Posted: 21 Feb 2022 12:05 PM PST

Sorry if this is a silly question. I just don't understand why we need axioms such as the axiom of specification/replacement to ensure the existence of some sets. Can't we just use the singleton axiom (given an object, there is a set containing this object) and the union operator to construct many other sets?

special equivalence rules

Posted: 21 Feb 2022 12:05 PM PST

  • ((E ∨ (H ∨ I)) ↔ ((E ∨ H) ∨ I))

  • ((C ∧ D) ↔ (D ∧ C))

Proof of the Strong Markov Property in Mörters and Peres

Posted: 21 Feb 2022 11:58 AM PST

I am currently working through "Brownian Motion" by Mörters and Peres and I am confused by something in their proof of the strong Markov property (Theorem 2.16); the text can be found here: https://people.bath.ac.uk/maspm/book.pdf.

They define $B_k=\{B_k(t):t\geq 0\}$ where $B_k(t)= B(t+k/2^n)-B(k/2^n)$. In the middle of the proof they say $P\{B_k\in A\}=P\{B\in A\}$ and that this follows from the simple Markov property (theorem 2.3) but I cannot see why $B_k$ and $B$ have the same distributions. I know that $B_k$ is a Brownian motion starting at $0$ and that $B(t+k/2^n)-B(k/2^n) \sim N(0,t)$ so I believe that $P(B_k\in A) = P(B\in A)$ if the Brownian motion was assumed to be a standard Brownian motion, but that is not the case here. It seems to me that if $B(0)=x \in \mathbb{R}^d$ then $P(B_k\in A) = P(B-x\in A)$ (where $B-x = \{B(t)-x :t\geq 0\})$. I feel like I may not be understanding the simple Markov property properly.

Any clarification is greatly appreciated.

Deviation lines in the Poincaré disk model

Posted: 21 Feb 2022 12:09 PM PST

The question has been asked before, I guess: two geodesics are given in the Poincaré disk model (say $K=-1$), cut by a transversal $(T_1T_2)$.


Please help find the common yellow area and the perimeter enclosed between the two geodesic lines, assuming the deviating non-geodesic (given by center and radius) is defined.

I could not find how the latter is exactly defined in any model.

Kenmotsu rotational surfaces

Posted: 21 Feb 2022 12:05 PM PST

I was wondering if someone knows which parameterization was used in these cases. I calculated up to $$Z(s)=\left(\frac{1}{2iH}(1-e^{-2iHs})+C\right)e^{2iHs},$$ but can't figure out what parameterization was done after that!


Is $\frac{1}{\zeta(1+it)}$ bounded as $t \rightarrow \infty$?

Posted: 21 Feb 2022 11:57 AM PST

Let $\zeta$ denote the Riemann zeta function. My question: is $\frac{1}{\zeta(1+it)}$ bounded as $t \rightarrow \infty$ ?

The motivation of my question: It seems that if the answer is no, then there exists a sequence of zeros of $\zeta$ whose real parts converge to $1$.

Indeed, let $\Theta \leq 1$ be the supremum of the real parts of the zeros of $\zeta$. Assume that $\frac{1}{\zeta(1+it)}$ is unbounded as $t \rightarrow \infty$, suppose that $\Theta <1$ and let $c>\Theta$. Denote by $\mu$ the Möbius function and define $A(x)=\sum_{n\leq x} \frac{\mu(n)}{n^c}$. Note that for $\Re(s)>\Theta$, one has

$$\frac{1}{\zeta(s)} = \sum_{n=1}^{\infty} \frac{\mu(n)}{n^s} = \sum_{n=1}^{\infty}\frac{\mu(n)}{n^c}n^{c-s} = (s-c)\int_{1}^{\infty} A(x)x^{c-s-1} \mathrm{d}x. \tag{1}$$

Since $c>\Theta$ and $M(x):=\sum_{n\leq x} \mu(n) \ll x^{\Theta + \varepsilon}$ for any fixed $\varepsilon>0$, it follows by partial summation that

$$\sum_{n>x} \frac{\mu(n)}{n^c} \ll \int_{x}^{\infty} |M(t)|t^{-c-1} \mathrm{d}t \ll x^{\Theta+\varepsilon-c} \tag{2}$$ hence

$$A(x) = \sum_{n=1}^{\infty} \frac{\mu(n)}{n^c} - \sum_{n>x} \frac{\mu(n)}{n^c} = \frac{1}{\zeta(c)} - \sum_{n>x} \frac{\mu(n)}{n^c} = \frac{1}{\zeta(c)} + O(x^{\Theta+\varepsilon-c}). \tag{3}$$ Choose $\varepsilon$ so that $\Theta+\varepsilon<1$. Inserting (3) into (1) yields

$$\frac{1}{\zeta(s)} = \frac{1}{\zeta(c)} + O\Big(\frac{s-c}{s-\Theta-\varepsilon} \Big) \tag{4} $$

for $\Re(s)>\Theta+\varepsilon$. We can therefore put $s=1+it$ where $t \in \mathbb{R}$, into (4) and obtain

$$\frac{1}{\zeta(1+it)} = \frac{1}{\zeta(c)} + O(1). \tag{5}$$

But by our assumption, the left-hand side of (5) is unbounded as $t \rightarrow \infty$, so we reach a contradiction. This implies that our supposition must be false, hence $\Theta$ must equal $1$. However, this is extremely unlikely to be true, since under the Riemann hypothesis we believe that $\Theta=0.5$.

Functional Analysis assignment.

Posted: 21 Feb 2022 12:01 PM PST

I'm stuck on the following problem. It seems easy enough, but I cannot find any idea to start with. I'm wondering whether we could find a solution by brainstorming and sharing our ideas. Any help is much appreciated.

Let $X$ be a Banach space and let $f : X \to [0, \infty)$ be a function with the following properties:

(i) it is lower semicontinuous, in the sense that the set $\{x \in X : f(x) > t\}$ is open for every $t \in [0, \infty)$;

(ii) it is $d$-homogeneous, in the sense that $f(\lambda x) = |\lambda|^d f(x)$ for every $\lambda \in \mathbb{C}$ and every $x \in X$;

(iii) there is a constant $C > 0$ such that $f(x + y) \leq C \max\{f(x), f(y)\}$ for every $x, y \in X$.

Prove that there is a constant $K > 0$ with the property that $f(x) \leq K\|x\|^d$ for every $x \in X$, where $\|x\|$ stands for the norm of $x$. [Hint: use the Baire category theorem.]

Exponentiating limit ordinals

Posted: 21 Feb 2022 11:57 AM PST

I have been reading about ordinal arithmetic and I came across this definition for ordinal exponentiation, when the exponent is a limit ordinal and the base isn't. Here is the definition: $$\begin{array}{lcl} \alpha^0 &= & 0' \\ \alpha^{\beta'} &= &(\alpha^\beta) \times \alpha \\ \alpha^{\lambda} &= & {\bigcup \{\alpha^\beta : \beta < \lambda\} \text{ ($\lambda$ a limit ordinal)}} \end{array}$$ The definition I am talking about is the last one. I was wondering if there is a similar definition for when the base is also a limit ordinal. Any help on this would be greatly appreciated.

Use mathematical induction to show that $e^x > 1 + \sum_{r=1}^{n} \frac{x^r}{r!}$ [duplicate]

Posted: 21 Feb 2022 11:50 AM PST

Using Mathematical Induction, prove that : $e^{x}>1+x+{x^2\over 2!}+{x^3\over 3!}+...+{x^n\over n!}$

if $x>0$ and $n$ is any positive integer.

My approach

Let $f(x)=e^x-1-x-{x^2\over2!}-{x^3\over 3!}-...-{x^n\over n!}$

Then I need to find ${d\over dx}f(x)$ and show that the inequality $f'(x) > 0$ holds.

$f'(x)=e^x-1-x-{x^2\over2!}-{x^3\over 3!}-...-{x^{n-1}\over (n-1)!}$

$f'(x)=f(x)+{x^n\over n!}$

I don't know how to proceed further. Any suggestions, hint would be appreciated.
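One way to close the argument from here (a sketch): index the function by $n$, so the derivative computed above says $f_n' = f_{n-1}$, and induct on $n$:

```latex
f_n(x) = e^x - \sum_{r=0}^{n} \frac{x^r}{r!}, \qquad f_n'(x) = f_{n-1}(x).
% Base case: f_0(x) = e^x - 1 > 0 for every x > 0.
% Step: if f_{n-1}(x) > 0 on (0,\infty), then f_n'(x) > 0 there, so f_n is
% strictly increasing on (0,\infty); since f_n(0) = 0, this gives
% f_n(x) > 0 for x > 0, which is exactly e^x > 1 + x + \cdots + x^n/n!.
```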

What does $D^tf$ mean?

Posted: 21 Feb 2022 12:00 PM PST

Let $V$ be the vector space of all polynomial functions on the real field and let $f$ be the linear functional defined by

$$f(p)=\int_{0}^{1}p(x)dx.$$

If $D$ is the differential operator over $V$, what does $D^t(f)$ mean?

As we know, $f:V\to\mathbb{R}$ and $D:V\to V$, so $D^t$ is well defined by means of composition with itself over $V$. However, given that $f\in V^*$, I don't know how to understand the symbol $D^{t}f$.

Prove or disprove that the subspace topology on $\{\frac{1}{n}:n\in\mathbb{N}_+\}\subset \mathbb{R}$ is the discrete topology.

Posted: 21 Feb 2022 12:10 PM PST

Say I have $A = \lbrace \frac{1}{n} | n \in \mathbb{N}_+ \rbrace \subset \mathbb{R}$

I'm trying to figure out if the subspace topology on $A$ that is inherited from $\mathbb{R}$ is the discrete topology.

For the subset $A$ of $\mathbb{R}$, the subspace topology on $A$ is defined by

$\mathcal{T}_A:=\lbrace A \cap U \mid U \in \mathcal{T}\rbrace$ where $\mathcal{T}$ is the standard topology on $\mathbb{R}$.

The largest topology on a topological space $X$ contains all subsets as open sets and is called the discrete topology; in the discrete topology, every singleton $\{x\}$ is an open set.

In other words, we have to figure out whether the subspace topology on $A$ contains every singleton as an open set. Disproving the initial statement would mean finding a point where this fails, if one exists.

We know $A\subset \mathbb{R}$. The subspace topology inherited from $\mathbb{R}$ is then the topology whose open sets are unions of sets of the form $(a,b)\cap A$, with $a,b\in\mathbb{R}$ and $a<b$.

The issue I'm having is figuring out what an open set would look like here. I've been told that a subset $X$ is open in $\mathbb{R}$ if it meets the definition of the topology on $\mathbb{R}$, but $A$ is more than one open interval $(a,b) \subset \mathbb{R}$. Is there a test to apply to $A$ generally?
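One concrete computation along the lines asked for: each singleton of $A$ is the trace of an open interval, which is exactly the kind of test the subspace topology allows:

```latex
\left\{\tfrac{1}{n}\right\} \;=\; A \cap \left(\tfrac{1}{n+1},\,\tfrac{1}{n-1}\right)
\quad (n \ge 2), \qquad \{1\} \;=\; A \cap \left(\tfrac12,\, 2\right).
% Each interval on the right contains exactly one point of A, so every
% singleton is open in A; every subset of A is then a union of open
% singletons, hence open, and the subspace topology is discrete.
```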

On the approximate Birkhoff orthogonality

Posted: 21 Feb 2022 12:02 PM PST

In an inner product space $(X,\langle \cdot \vert \cdot \rangle)$, an approximate orthogonality with $\varepsilon \in [0,1)$ is defined as follows: $$x\perp^{\varepsilon}y \quad \Leftrightarrow \quad \vert\langle x \vert y \rangle\vert \leq \varepsilon \Vert x\Vert \Vert y\Vert.$$ On the other hand, in a normed space $(X,\Vert \cdot \Vert)$, an approximate Birkhoff orthogonality with $\varepsilon \in [0,1)$ is defined as follows: $x\perp_B^{\varepsilon} y$ iff $$\forall \lambda \in \mathbb{K}: \quad \Vert x+\lambda y\Vert^2 \geq \Vert x\Vert^2 - 2\varepsilon \Vert x\Vert \Vert \lambda y\Vert.$$ How can I show that $\perp^{\varepsilon}_B = \perp^{\varepsilon}$ in an inner product space?

Showing that the limit over the discrete two-object category defines the usual product.

Posted: 21 Feb 2022 12:10 PM PST

How can we see that the categorical limit of a diagram $F$ from the discrete two-object category to sets is isomorphic to the usual product $\{(x,y):x\in X, y\in Y\}$ defined elementwise on sets? Certainly if im$(F)=\{X,Y\}$ then $X\times Y$ is the apex of a cone over $F$ via the projection maps.

I think the correct approach should be to consider sets strictly larger than, smaller than, or equal in size to $X\times Y$, because that would allow us to construct (or perhaps fail to construct) maps through $X\times Y$, but I'm not sure how to formalise this.

Is the factorization of an $n$-cycle in $S_n$ as a product of $n-1$ transpositions unique?

Posted: 21 Feb 2022 12:16 PM PST

I'm studying Graph Theory, and I have a question. I'm trying to prove that the product of $n-1$ transpositions in $S_n$ is an $n$-cycle if and only if the associated graph of $n$ vertices ($i, j$ adjacent if and only if $(i j)$ is one of those transpositions) is a tree.

I think the proof assumes that the factorization of an $n$-cycle into $n-1$ transpositions always has the same structure, such as $(x_1 x_2)(x_1 x_3) \cdots (x_1 x_n)$, so that the first entry of each transposition is always the same. Products of that form do work for any $n$-cycle and for any $x_1 \in \{1, \ldots, n\}$, but is it a given that the transpositions need to be like that?

That is, can't the first entry of each transposition be different from the others?

Localization of Cohen-Macaulay module of finite projective dimension at non-maximal prime ideal

Posted: 21 Feb 2022 12:08 PM PST

Let $(R,\mathfrak m)$ be a local Gorenstein domain of dimension $2$. Let $M$ be a finitely generated $1$-dimensional module with projective dimension $1$. Then by Auslander-Buchsbaum formula,

$\mathrm{depth}(M)=2-\mathrm{pd}(M)=1=\dim M$. Hence, $M$ is a Cohen-Macaulay module.

My question is:

Must it be true that $M_{\mathfrak p}$ is not free for some prime ideal $\mathfrak p\ne \mathfrak m$ ?

Evaluate the uniform and the normal convergence of the series of functions $\sum \frac{x}{(1 + x^2)^n}$

Posted: 21 Feb 2022 11:56 AM PST

A series of functions with the general term $ f_n(x) $ defined as:
$$ f_n(x) = \frac{x}{(1 + x^2)^n} \space , \space x \geq 0 $$

  1. Evaluate the uniform and normal convergence of the series $\sum_{n = 1}^{+ \infty} \frac{x}{(1 + x^2)^n} $ on $\mathbb{R}^+$
  2. Prove that the series has normal convergence on $[a, + \infty[$ with $ a > 0 $
  3. Is the series uniformly convergent on $[0,1]$?

We have for all $ x \geq 0 $:

$$ f_n'(x) = \frac{(1 + x^2)^{n - 1}[1 + (1 - 2n)x^2]}{(1 + x^2)^{2n}} = 0 \iff x = \frac{1}{\sqrt{2n - 1}} $$

and for all $ x \geq 0 $ :

$$ | f_n(x) | \leq f_n( \frac{1}{\sqrt{2n - 1}} ) = \frac{\sqrt{2n - 1}}{2^n (2n - 1) (1 + n)^n} \leq \frac{1}{2^n} $$

Since $ \sum \frac{1}{2^n} $ is convergent, $ \sum f_n $ has normal convergence on $[0, + \infty[$.

  1. From this question, I think that $ \sum f_n $ has normal convergence on the interval $[a, + \infty[$ only, and not on $ [0, + \infty[ $.

  2. But from what I found in question 1, the series has normal convergence on $ [0, + \infty[ $, which implies uniform convergence on the interval $ [0,1] $.

What am I missing? Thank you.

The infimum of a discrete process with a random variable as index

Posted: 21 Feb 2022 11:54 AM PST

Consider the time-discrete stochastic process $X_t = t - \sum_{i=1}^t \xi_i$, with $t \in \mathbb{N}$, where the $\xi_i$ are i.i.d. taking values $0$ and $2$ with $P(\xi=0)=1-p$ and $P(\xi = 2)=p$ (a scaled Bernoulli distribution). Let $\eta_z$ have a geometric distribution with parameter $z$. My question is: which distribution does $\underline{X}_{\eta_z}$ follow? For the infimum we write $\underline{X}_t=\inf_{0\leq s \leq t} X_s$. I want to calculate $E[b^{\underline{X}_{\eta_z}}]$. I tried to discretize the Kella-Whitt martingale, but this is not possible. Thanks in advance for any tips.

What is the functional related to the problem $v^{\prime\prime} +v =G^{\prime}(v+a) -a$?

Posted: 21 Feb 2022 12:19 PM PST

Since I need to study the problem $$v^{\prime\prime} +v =G^{\prime}(v+a) -a,\qquad v(0)=v(T)=0$$ variationally, where $a\in\mathbb{R}$ and $G^{\prime}$ is a suitable function, I need to write down its associated functional.

I guess it should be something related to $$J(v) =\frac12\int v^2 dx-\int G(v+a)dx +a\int v^2 dx,$$ but I don't know what the part related to $v^{\prime\prime}$ becomes.

Could someone please help me with that?

Thank you in advance.

Double integral $ \int_{0}^{\alpha}dx\int_{0}^{\alpha}dy\frac{\ln|x-y|}{\sqrt{xy}}$

Posted: 21 Feb 2022 12:10 PM PST

How do I show the following: $$ \int_{0}^{\alpha}dx\int_{0}^{\alpha}dy\frac{\ln|x-y|}{\sqrt{xy}}=4\alpha(\ln(\alpha)+2\ln(2)-3) $$ where $\alpha>0$? The integral has arisen as I've been studying the singularity of the Hankel function near the origin (which behaves like a logarithm).
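A numerical sanity check is easy here. Substituting $x=\alpha s^2$, $y=\alpha t^2$ gives $dx\,dy = 4\alpha^2 st\,ds\,dt$, and the integral becomes $4\alpha\int_0^1\!\int_0^1 \ln(\alpha|s^2-t^2|)\,ds\,dt = 4\alpha(\ln\alpha + J)$ with $J=\int_0^1\!\int_0^1 \ln|s^2-t^2|\,ds\,dt$, so the claimed identity is equivalent to the constant $J = 2\ln 2 - 3$. A midpoint-rule estimate of $J$ (the grid sizes are arbitrary, chosen with different spacings so that no midpoint lands on the singular diagonal $s=t$):

```python
import numpy as np

n, m = 1000, 1001      # different spacings: no midpoint pair has s == t
s = (np.arange(n) + 0.5) / n
t = (np.arange(m) + 0.5) / m
# midpoint rule for J = integral of ln|s^2 - t^2| over the unit square
J_est = np.log(np.abs(s[:, None] ** 2 - t[None, :] ** 2)).mean()
J_exact = 2.0 * np.log(2.0) - 3.0    # what the claimed identity requires
```

The logarithmic singularity along $s=t$ is integrable, so the midpoint estimate lands within a few hundredths of $2\ln 2 - 3 \approx -1.6137$, consistent with the stated closed form.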

Thanks in advance for any help.

Refinement of $X=\{a,b,c\}$

Posted: 21 Feb 2022 12:15 PM PST

Let $X=\{a,b,c\}, T=\{\emptyset,X,\{a\},\{a,b\},\{a,c\}\}$. Is it correct to say that $\{X\}$ is an open refinement of every open cover of $X$?

How can I derive the mean and the variance of the Cox-Ingersoll-Ross model by Itô's lemma?

Posted: 21 Feb 2022 12:13 PM PST

According to Wikipedia and some other articles I've seen, the mean of the CIR model $dr_t=a(b-r_t)dt+\sigma\sqrt{r_t}\, dW_t$

is the following: $E(r_t) =r_0e^{-at}+b(1-e^{-at})$

And the variance: $ Var(r_t)=r_0\frac{\sigma^2}{a}(e^{-at}-e^{-2at})+\frac{b\sigma^2}{2a}(1-e^{-at})^2$

I have tried to derive it with Itô's lemma by taking the function $r_te^{at}$ and working through the Itô formula, but my mean and variance results were very different. I'm preparing for an exam and my professor only wants us to solve these stochastic differential equations with Itô's lemma.

I'm super frustrated that I can't arrive at the correct answer. Could anyone more experienced with stochastic calculus enlighten me? Thanks :)

  • I have seen a similar question on here; however, it wasn't solved by Itô's lemma.

My approach was:

  • to take the function $x=e^{at}r_t$

And use the Itô lemma formula:

$e^{at}dr_t +ae^{at}r_t\,dt$

Now substituting the original CIR model for $r_t$ into $dr_t$: $abe^{at}dt+ae^{at}r_tdt+e^{at}\sigma\sqrt{r_t}\, dW_t+ae^{at}r_t dt$

$ \int_{0}^{t} abe^{at}\,ds + \int_{0}^{t} ae^{at}r_t\,ds + \int_{0}^{t} e^{at}σ\sqrt r_t dW_t\,ds $

E(X) = $x0+2e_te^{ab}+be^{at}-ab-2ar_t+ \int_{0}^{t} e^{at}\sqrt r_t dW_t\,dt$

V(X) = $[x0+2e_te^{ab}+be^{at}-ab-2ar_t+ \int_{0}^{t} e^{at}\sqrt r_t dW_t\,dt]^2$

My second approach was taking the function $e^{bt}r_t$. I followed the same procedure; after applying Itô's lemma, $be^{bt}r_t dt + e^{bt}dr_t$, and substituting $dr_t$ with the CIR equation:

$be^{bt}r_t dt+ ab e^{bt}dt+ ar_te^{bt}dt+e^{bt}σ\sqrt r_t dW_t$

$\int_{0}^{t} be^{bt}r_t \,ds + \int_{0}^{t} ab e^{bt} \,ds + \int_{0}^{t} ar_te^{bt} \,ds + \int_{0}^{t} e^{bt}σ\sqrt r_t dW_t \,ds$

$E(x) = x0 +(e^{bt}-1)rt+ ae^{bt}-a + \frac{ar_te^{bt}}{b}-ar_t + \int_{0}^{t} e^{bt}σ\sqrt r_t dW_t \,ds $

$V(x) = [x0 +(e^{bt}-1)rt+ ae^{bt}-a + \frac{ar_te^{bt}}{b}-ar_t + \int_{0}^{t} e^{bt}σ\sqrt r_t dW_t \,ds]^2 $
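Independently of where the Itô bookkeeping goes astray, the target formulas can be checked by simulation. The sketch below uses an Euler-Maruyama scheme with the square root truncated at zero (a common practical choice, not part of the question) and arbitrary parameter values chosen only for the test:

```python
import numpy as np

rng = np.random.default_rng(0)
a, b, sigma, r0, T = 2.0, 1.0, 0.3, 0.5, 1.0   # arbitrary test values
steps, paths = 200, 20_000
dt = T / steps

r = np.full(paths, r0)
for _ in range(steps):
    dW = rng.normal(0.0, np.sqrt(dt), paths)
    # full-truncation Euler: clip the argument of the square root at 0
    r = r + a * (b - r) * dt + sigma * np.sqrt(np.maximum(r, 0.0)) * dW

mean_exact = r0 * np.exp(-a * T) + b * (1.0 - np.exp(-a * T))
var_exact = (r0 * sigma**2 / a * (np.exp(-a * T) - np.exp(-2 * a * T))
             + b * sigma**2 / (2 * a) * (1.0 - np.exp(-a * T)) ** 2)
```

Agreement of the simulated mean and variance of `r` with `mean_exact` and `var_exact` suggests the Wikipedia expressions are the right targets, so the discrepancy must lie in the Itô computation itself.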

Sample Pearson correlation coefficient

Posted: 21 Feb 2022 11:54 AM PST

Given paired data $\left\{(x_{1},y_{1}),\ldots ,(x_{n},y_{n})\right\}$ consisting of $n$ i.i.d. pairs ($x_i$ and $y_i$ are independent), $r_{xy}$ is defined as:

$$ r_{xy}={\frac {\sum _{i=1}^{n}(x_{i}-{\bar {x}})(y_{i}-{\bar {y}})}{{\sqrt {\sum _{i=1}^{n}(x_{i}-{\bar {x}})^{2}}}{\sqrt {\sum _{i=1}^{n}(y_{i}-{\bar {y}})^{2}}}}}$$

where $Ex_1=\mu_1$, $Ey_1=\mu_2$, $Var [x_1]=\sigma_1^2$, $Var [y_1]=\sigma_2^2$.

What is the limiting distribution of $\frac{\sqrt{n-2} \, r_{xy}}{\sqrt{1-r_{xy}^2}}$?
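A quick simulation illustrates the classical answer one usually aims for here: under independence with normal marginals (an extra assumption made only for this experiment), $\sqrt{n-2}\,r_{xy}/\sqrt{1-r_{xy}^2}$ has exactly a Student $t_{n-2}$ distribution, and hence tends to a standard normal as $n\to\infty$:

```python
import numpy as np

rng = np.random.default_rng(1)
n, reps = 20, 4000
stats = np.empty(reps)
for k in range(reps):
    x = rng.normal(size=n)
    y = rng.normal(size=n)           # independent of x
    r = np.corrcoef(x, y)[0, 1]
    stats[k] = np.sqrt(n - 2) * r / np.sqrt(1 - r**2)

# for n = 20 the statistic is t_18: mean 0, variance (n-2)/(n-4) = 1.125
```

The empirical mean and spread of `stats` match the $t_{18}$ values, and increasing $n$ moves the variance toward $1$, the standard normal limit.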

Torus bundles and compact solvmanifolds

Posted: 21 Feb 2022 11:54 AM PST

Let $ \Gamma $ be a group which fits into a SES $$ 1 \to \mathbb{Z}^n \to \Gamma \to \mathbb{Z}^m \to 1 $$ then must we have $$ \pi_1(M) \cong \Gamma $$ for some $ n+m $ dimensional compact solvmanifold $ M $?

A similar question: Is every torus bundle over a torus a solvmanifold? In other words, if we have a fiber bundle $$ T^n \to M \to T^m $$ then can we conclude that $ M $ is a solvmanifold?

This is true for $ n=m=1 $: the only extensions of $ \mathbb{Z} $ by $ \mathbb{Z} $ are $ \mathbb{Z}^2 $ and the Klein bottle group, and indeed the compact 2-dimensional solvmanifolds are exactly $ T^2 $ and the Klein bottle $ K $. These two are also the only circle bundles over the circle.

For dimension 3 I'm a bit less sure but I think all the bundles $$ T^1 \to M \to T^2 $$ are solvmanifolds. And I know that many of the bundles $$ T^2 \to M \to T^1 $$ are solvmanifolds.

standard error formula for Bernoulli distribution

Posted: 21 Feb 2022 12:17 PM PST

I am confused about the formula for the standard error here.

I know that the standard error of the sample average $\bar Y$ should be an estimator of the standard deviation of the sampling distribution of $\bar Y$. This covers the case where the population standard deviation is unknown, and the formula should be $SE[\bar Y] = $ sample standard deviation$/\sqrt{n}$.

However, i am considering the case where i know i am drawing samples from a Bernoulli distribution; this has two cases.

  1. I know it is Bernoulli, but I don't know its real $p$ (probability of success). Then I don't know its variance, so the SE should be computed with the formula above. For example, with $Y_1 = 0$, $Y_2 = 1$, $Y_3 = 1$, the sample variance is $A = \frac{1}{2}\left[(0 - 2/3)^2 + 2(1-2/3)^2\right]$, so the sample SE should be $\sqrt{\frac{A}{n}}$, which is different from $\sqrt{\frac{\bar Y(1 - \bar Y)}{n}}$. (Which formula is correct?)

  2. I know it is Bernoulli and I know its real $p$. Then the variance of $\bar Y$ is $p(1-p)/n$, and its standard deviation is the square root of that, so we probably don't even need to estimate the standard error: it equals the standard deviation of $\bar Y$.

However, the following statement confuses me: when the $Y_i$ are i.i.d. draws from a Bernoulli distribution with success probability $p$, the variance of $\bar Y$ is $p(1-p)/n$, yet $SE[\bar Y]$ is given as $\sqrt{\frac{\bar Y(1-\bar Y)}{n}}$. I am not sure why, when computing the SE, we don't use $p$ but use $\bar Y$ instead.
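The two competing formulas can be put side by side for the sample $Y=(0,1,1)$ from case 1 (a sketch: the first uses the $(n-1)$-denominator sample variance, the second the plug-in Bernoulli variance $\bar Y(1-\bar Y)$; both are consistent estimators of $SE[\bar Y]$, which is why texts use whichever is convenient):

```python
# the sample from case 1 of the question
y = [0, 1, 1]
n = len(y)
ybar = sum(y) / n                                  # = 2/3

s2 = sum((v - ybar) ** 2 for v in y) / (n - 1)     # the quantity A: 1/3 here
se_sample = (s2 / n) ** 0.5                        # sqrt(A/n)  = 1/3
se_plugin = (ybar * (1 - ybar) / n) ** 0.5         # sqrt(2/27) ~ 0.2722
```

So the two formulas genuinely differ on a finite sample; neither is "wrong", they are different estimators of the same unknown $\sqrt{p(1-p)/n}$, which is only available exactly in case 2 where $p$ is known.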

Does surjective-infinite imply Dedekind-infinite?

Posted: 21 Feb 2022 12:02 PM PST

Suppose we define a set $A$ to be surjective-infinite iff there is a surjection $f:A \rightarrow A$ that is not an injection. Is it true that $A$ is then also Dedekind-infinite, i.e. there exists an $f^*:A\to A$ that is injective but not surjective?

Conceptually, using AC if need be, I could use the given $f$ to construct $f^*$ by sort of 'removing' duplicates from the preimage of $a$ under $f$: if $f(c)=f(b)=a$, define $f^*(a)$ to just be $b$.

If this is true, how do I do it more concretely?

Cheers

Design a Turing machine which computes the sum of two numbers in base $2$

Posted: 21 Feb 2022 12:05 PM PST

Question:

Design a Turing machine which computes the sum of two numbers in base $2$. For example, if the input is $110x1$, the machine should return $000x111$ (which means $6+1=7$).


My try:

At each iteration, decrease the first number by $1$ and increase the second one by $1$. Repeat this until the first number becomes $0$. (I know how to increase and decrease a number in base $2$ using a Turing machine.)


Problem:

(I) I don't know what to do if the input is like $111x001$; my algorithm is not general enough to solve this case too.
(II) It seems like the numbers should have the same number of digits, but I'm not sure of that. Can we design the machine without presuming this condition?
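The proposed algorithm can be rehearsed at the tape level before committing to a transition table. The sketch below is ordinary Python rather than a Turing machine, but it implements exactly the decrement/increment loop and shows that, if the increment is allowed to extend the string by one cell on overflow, inputs like $111x001$ and operands of unequal length are handled, answering concern (II) for the algorithm itself:

```python
def inc(bits):
    """Binary increment; may grow the string by one cell on overflow."""
    b = list(bits)
    i = len(b) - 1
    while i >= 0 and b[i] == "1":
        b[i] = "0"
        i -= 1
    if i < 0:
        return "1" + "".join(b)        # e.g. 111 -> 1000: tape grows
    b[i] = "1"
    return "".join(b)

def dec(bits):
    """Binary decrement; the caller guarantees the value is nonzero."""
    b = list(bits)
    i = len(b) - 1
    while b[i] == "0":
        b[i] = "1"
        i -= 1
    b[i] = "0"
    return "".join(b)

def add(a, b):
    """Repeat (decrement first, increment second) until the first is zero."""
    while set(a) != {"0"}:
        a, b = dec(a), inc(b)
    return b
```

A real machine would realise `inc` and `dec` as head sweeps with carry states over the two fields of the tape; the point here is only that the counting algorithm itself is sound without assuming equal digit counts.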
