Sunday, November 7, 2021

Recent Questions - Mathematics Stack Exchange

What does $m<|x|$ imply?

Posted: 07 Nov 2021 07:47 PM PST

I know $|x|\le M \ \implies -M\le x \le M$ but what does $m\le|x|$ imply?

Can I just plainly interpret this as $m\le x$?

And if I say $m\le x \le M$, does this still mean $-M \le x \le M$?
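For reference, a small worked case (an editorial addition, assuming $m \ge 0$) makes the case split explicit:

$$ m\le |x| \iff x \ge m \ \text{ or } \ x \le -m. $$

For example, $x=-5$ satisfies $2 \le |x|$ but not $2 \le x$, so the inequality cannot be read as $m \le x$ alone.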

Riemann surfaces from Weierstrass's idea (as a complete analytic function)

Posted: 07 Nov 2021 07:46 PM PST

Currently I am reading some textbooks on Riemann surfaces, mainly Donaldson's book. In Donaldson's book (and some other more recently written books), most Riemann surfaces seem to be constructed in an "algebraic curve" style or as quotient spaces, which is quite geometric.

But some older books, like Ahlfors's Complex Analysis or George Springer's Introduction to Riemann Surfaces, give a constructive view of Riemann surfaces via germs/sheaves of analytic functions, or via the complete analytic function. In this framework it seems much easier to handle "transcendental" functions like $w=\sqrt{z} + \log(z)$.

And some related questions I found:

So my question may be summarized as:

  1. Is there any newer textbook that treats Riemann surfaces comprehensively from this point of view? (Both Ahlfors's and George Springer's books are quite old.)
  2. Why do most modern textbooks on Riemann surfaces no longer discuss this view? How do we handle a Riemann surface like $w=\sqrt{z} + \log(z)$ in the modern framework?
  3. Is it because the complete analytic function is hard to compute with? I do understand that it's hard to carry out analytic continuation with power series, but the "instructive" analytic continuation does not seem that hard.

Sorry for the question being kind of soft, thanks!

A condition for openness in a topology

Posted: 07 Nov 2021 07:44 PM PST

Let $X$ be a set with topology $\tau$. Suppose that $A \subset X$ is such that for all $a \in A$ there is an open set $U_{a}$ containing $a$ and contained in $A$. Prove that $A$ is open.

I think that $A$ can be written as the union of all the $a$'s in $A$, and is thus contained in the union of the $U_{a}$'s. But that alone does not prove that $A$ is open. What do I need in addition to prove such a statement?

If $\mathrm{Aut}(G \times H) \cong \mathrm{Aut}(G) \times \mathrm{Aut}(H)$, then is $\mathrm{gcd}(\left| G \right|, \left| H \right|) = 1$?

Posted: 07 Nov 2021 07:33 PM PST

I know that the converse direction holds, i.e., if $\mathrm{gcd}(\left| G \right|, \left| H \right|) = 1$, then $\mathrm{Aut}(G \times H) \cong \mathrm{Aut}(G) \times \mathrm{Aut}(H)$. This is shown here.

My question is about the other direction: if $\mathrm{Aut}(G \times H) \cong \mathrm{Aut}(G) \times \mathrm{Aut}(H)$, does it follow that $\mathrm{gcd}(\left| G \right|, \left| H \right|) = 1$?

I haven't done any work on this; it is just a question out of curiosity.

$\sum_{n=0}^{\infty} {n \choose y} p^{y+1}(1-p)^{2n-y}$

Posted: 07 Nov 2021 07:30 PM PST

I am stuck on finding the sum of $\sum_{n=0}^{\infty} {n \choose y} p^{y+1}(1-p)^{2n-y}$.

The sum looks quite similar to a negative binomial sum but I can't really find the exact form. Can anyone help?
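As a numerical sanity check (an editorial sketch, not part of the original question), the series can be compared against the candidate closed form obtained from the negative binomial series $\sum_{n\ge y}\binom{n}{y}q^{n} = \frac{q^{y}}{(1-q)^{y+1}}$ with $q=(1-p)^2$; the closed form in the code is my own derivation and should be treated as an assumption to verify:

```python
from math import comb

def partial_sum(p, y, N=2000):
    """Partial sum of sum_{n >= 0} C(n, y) * p^(y+1) * (1-p)^(2n - y)."""
    return sum(comb(n, y) * p**(y + 1) * (1 - p)**(2 * n - y) for n in range(N))

def candidate_closed_form(p, y):
    # (1-p)^y / (2-p)^(y+1): obtained by summing the negative binomial series
    # with q = (1-p)^2 (editorial derivation, not from the original post).
    return (1 - p)**y / (2 - p)**(y + 1)

for p, y in [(0.3, 2), (0.5, 4), (0.7, 1)]:
    print(p, y, partial_sum(p, y), candidate_closed_form(p, y))
```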

Ito's Lemma and Ansatz

Posted: 07 Nov 2021 07:12 PM PST

I've been studying stochastic calculus on my own for a while and am stumped on some things that I don't fully understand:

Let's start with the geometric Brownian motion:

\begin{align} dS_t = \mu S_t dt + \sigma S_t dW_t \end{align}

To solve it for $S_t$ I need to find a function $f(t,S_t)$ to which I can apply Itô's Lemma:

\begin{align} df(t,S_t) = \frac{\partial}{\partial t}f(t,S_t)\,dt +\frac{\partial}{\partial S}f(t,S_t)\,dS_t + \frac{1}{2}\frac{\partial^2}{\partial S^2}f(t,S)\,(dS_t)^2 \end{align}

Every place that I look I see that $f(t,S_t) = \log(S_t)$, but I don't understand why. Is this just an ansatz?

If I change my SDE to a more generalized short rate model:

\begin{align} dS_t = (\alpha + \beta S_t) dt + \sigma S_t^\gamma dW_t \end{align}

Will I need to find another ansatz? And if I find a function $f(t,S_t)$ that works, can I simply do some algebra and get an expression for $S_t$?
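For reference (a standard computation added editorially for concreteness, not part of the original post), applying Itô's formula to $f(t,S)=\log S$ under the GBM dynamics gives

$$ d\log S_t = \frac{1}{S_t}\,dS_t - \frac{1}{2S_t^2}\,(dS_t)^2 = \left(\mu - \tfrac{1}{2}\sigma^2\right)dt + \sigma\, dW_t, $$

so the drift and diffusion no longer depend on $S_t$ and the equation integrates directly to $S_t = S_0\exp\!\big((\mu-\tfrac12\sigma^2)t + \sigma W_t\big)$. This is the sense in which $\log$ is the "right" ansatz for this particular SDE.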

Finding a basis of $T_pS$ where $S = \{(u,v,f(u,v)): u,v\in \Bbb R\}$ is a surface

Posted: 07 Nov 2021 07:37 PM PST

I am trying to prove the following result about tangent spaces to surfaces in $\Bbb R^3$. $T_pS$ denotes the set of all tangent vectors at $p\in S$.

Suppose $S = \{(u,v,f(u,v)): u,v\in \Bbb R\}$ is a surface in $\Bbb R^3$ where $f:\Bbb R^2\to\Bbb R$ is a differentiable map. Prove that $(1,0, f_u)$ and $(0,1,f_v)$ form a basis of $T_pS$, the tangent space at $p\in S$. Here, $f_u = \frac{\partial f}{\partial u}$ and $f_v = \frac{\partial f}{\partial v}$.


Here are some useful definitions:

Definition $1$. $S\subset \Bbb R^3$ is called a regular surface if for any $p\in S$, there exists an open set $U\subset \Bbb R^3$ containing $p$, an open set $V\subset \Bbb R^2$, and a differentiable$\color{red}{^1}$ map $\varphi: V\to U$ such that:

  1. The restriction map $\varphi:V\to U\cap S$ is a homeomorphism.
  2. For all $(x,y)\in V$, $D\varphi_{(x,y)}: \Bbb R^2\to\Bbb R^3$ (the derivative map of $\varphi$) is injective.

Definition $2$. $v\in T_p(S)$ iff there exists $\epsilon > 0$, and a differentiable map $c:(-\epsilon,\epsilon)\to\Bbb R^3$ such that $c(t)\in S$ for all $t\in (-\epsilon,\epsilon)$, $c(0) = p$ and $c'(0) = v$.

Let $p(u,v) = (u,v, f(u,v))$. Then, I noticed that $\frac{\partial p}{\partial u} = (1,0,f_u)$ and $\frac{\partial p}{\partial v} = (0,1,f_v)$. The result above claims that these are basis vectors of $T_pS$, which is a subspace of dimension $2$ in $\Bbb R^3$. However, I am not sure why partial differentiation yields the basis vectors in this case. It would be helpful if I could get some hints on how to go ahead. Thanks!


Footnotes:
$\color{red}{1.}$ $\varphi$ is differentiable in the sense that if $\varphi(u,v) = (\mathbf x(u,v), \mathbf y(u,v), \mathbf z(u,v))$ then $\mathbf x, \mathbf y, \mathbf z: \Bbb R^2\to\Bbb R$ have partial derivatives of all orders.
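One concrete family of curves (an editorial illustration, not part of the original question) connects Definition $2$ with the partial derivatives: fix $p=(u_0,v_0,f(u_0,v_0))$ and consider

$$ c(t) = \big(u_0+t,\; v_0,\; f(u_0+t,\,v_0)\big), \qquad c(0)=p, \quad c'(0) = (1,0,f_u(u_0,v_0)). $$

This curve and its $v$-direction analogue both lie in $S$, so by Definition $2$ the vectors $(1,0,f_u)$ and $(0,1,f_v)$ do belong to $T_pS$; what remains is to show they span it.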

Why is the normal trace operator for $H(\operatorname{div};\Omega)$ surjective? I think the proof is wrong.

Posted: 07 Nov 2021 07:03 PM PST

Prerequisite: for $\underline{q} \in H(\operatorname{div}, \Omega)$, we can define $\left.\underline{q} \cdot \underline{n}\right|_{\Gamma} \in H^{-\frac{1}{2}}(\Gamma)$ and $$ \int_{\Gamma} \underline{q} \cdot \underline{n} v d \sigma=\int_{\Omega} \operatorname{div} \underline{q} v d x+\int_{\Omega} \underline{q} \cdot \operatorname{grad} v d x \ \ \ \ \ \forall v\in H^{1}(\Omega) $$

Then in the textbook, it says :

Lemma 2.1.2. The trace operator $\left.\underline{q} \in H(\operatorname{div} ; \Omega) \rightarrow \underline{q} \cdot \underline{n}\right|_{\Gamma} \in H^{-\frac{1}{2}}(\Gamma)$ is surjective.

Proof. Let $g \in H^{-\frac{1}{2}}(\Gamma)$ be given. Then, solving in $H^{1}(\Omega)$ $$ \int_{\Omega} \operatorname{grad} \phi \cdot \operatorname{grad} v d x+\int_{\Omega} \phi v d x=\langle g, v\rangle, \forall v \in H^{1}(\Omega) $$ and making $\underline{q}=\operatorname{grad} \phi$ implies $\left.\underline{q} \cdot n\right|_{\Gamma}=g$.

I think the proof is wrong...

The first doubt is why we solve in $H^{1}(\Omega)$ $$ \int_{\Omega} \operatorname{grad} \phi \cdot \operatorname{grad} v d x+\int_{\Omega} \phi v d x=\langle g, v\rangle, \forall v \in H^{1}(\Omega) $$ instead of solving $$ \int_{\Omega} \operatorname{div} \underline{q}\, v\, d x+\int_{\Omega} \underline{q} \cdot \operatorname{grad} v\, d x = \langle g,v\rangle \ \ \ \ \ \forall v\in H^{1}(\Omega). $$ The second doubt is about the last sentence of the proof, making $\underline{q}=\operatorname{grad} \phi$: we only have $\phi \in H^{1}(\Omega)$, so why is that sufficient to show that $\operatorname{grad}\phi \in H(\operatorname{div},\Omega)$?
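A possible way to address the second doubt (an editorial sketch, not taken from the book): testing the variational problem with $v\in C^\infty_c(\Omega)$, whose trace on $\Gamma$ vanishes so that $\langle g, v\rangle = 0$, gives

$$ \int_\Omega \operatorname{grad}\phi\cdot\operatorname{grad} v\, dx = -\int_\Omega \phi v\, dx \quad \Longrightarrow \quad \operatorname{div}(\operatorname{grad}\phi) = \phi \ \text{ in } \mathcal D'(\Omega), $$

so $\operatorname{div}\underline{q} = \phi \in L^2(\Omega)$ and hence $\underline{q}=\operatorname{grad}\phi \in H(\operatorname{div};\Omega)$. Plugging this back into the identity defining the normal trace then yields $\underline{q}\cdot\underline{n}|_\Gamma = g$.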

Can a discrete variable turn into a continuous variable as its denominator approaches infinity?

Posted: 07 Nov 2021 07:06 PM PST

I am defining a set of points in the Cartesian plane based on their coordinates, using the variables $m$ and $n$: $n$ is the number of points, and for all $m\in \left \{ 1,2,3, ... ,n \right \}$ the $x$- and $y$-coordinates of these points are given by algebraic expressions in the terms $\frac{m}{n}$, $\frac{m+1}{n}$ and $\frac{1}{n}$. So $m$ iterates through all natural numbers until it reaches $n$. For example: $$\text{$x$-coordinate} = \left ( \frac{n}{n+1} - \frac{m}{n+1} \right )\cdot \left ( \sin(\theta) - \frac{m}{n}(\sin\theta + 1) + \frac{\sin(\theta)}{n} + \frac{m}{n} \right ) $$

I haven't derived the expression for the $y$-coordinate yet, but it will be in terms of these variables as well. Essentially, the points mark out the rough shape of some curve, and as the number of points approaches infinity, the points will exactly resemble this curve. While I could use algebraic manipulation to express $y$ in terms of $x$ using the two coordinate expressions, I feel this would take too much effort. I realised another way, and am posting this question to confirm whether or not it is actually possible.

The fraction $\frac{m}{n}$ ranges through all values $$\frac{m}{n}\in\left \{ \frac{1}{n},\frac{2}{n}, ... , \frac{n}{n} \right \}. $$ For lack of a better word, I call this a discrete variable since it can only take discrete values (although "discrete variable" probably means something else). However, as $n\rightarrow \infty$, wouldn't the fraction start becoming continuous? In other words, would I be right to say that as $n\rightarrow \infty$, $\frac{m}{n}\rightarrow t$, where $t$ is a parameter, or a "continuous variable"? I would then be able to replace $\frac{m}{n}$ with $t$ in the expressions for the $x$ and $y$ coordinates, and end up with a parametric equation in $t$, which would give me the equation of the curve that I am looking for. And lastly, if this is true, then what does $\frac{m}{n+1}$ approach as $n\rightarrow \infty$?
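As an illustration (an editorial sketch using the posted $x$-coordinate formula, with $\theta$ fixed arbitrarily), one can fix a target value $t$ and watch the finite-$n$ point with $m\approx tn$ settle down as $n$ grows, which is the intuition behind replacing $m/n$ by a continuous parameter:

```python
import math

def x_coord(m, n, theta):
    """x-coordinate formula from the question."""
    return (n / (n + 1) - m / (n + 1)) * (
        math.sin(theta) - (m / n) * (math.sin(theta) + 1)
        + math.sin(theta) / n + m / n
    )

theta = 1.0   # arbitrary illustrative choice
t = 0.3       # target value of m/n
for n in (10, 100, 1_000, 10_000, 100_000):
    m = max(1, round(t * n))          # nearest admissible m
    print(n, x_coord(m, n, theta))    # the values appear to settle as n grows
```

In the same spirit, $\frac{m}{n+1} = \frac{m}{n}\cdot\frac{n}{n+1}$, and $\frac{n}{n+1}\to 1$.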

Determine if the function is continuous at $x=0$, given $g(0)=0$, $g'(0)=1$.

Posted: 07 Nov 2021 07:16 PM PST

$$f(x) = \begin{cases} \frac{g(x)-\sin x}{x} & x \neq 0, \\ 0 & x=0. \end{cases}$$

I'm new to calculus, so I'm not too sure how to solve this question, especially the $\frac{g(x)-\sin x}{x}$ part.

$3$ Bracket Knockout Tournament Probability

Posted: 07 Nov 2021 07:29 PM PST

$12$ people in $3$ brackets ($A,B$, and $C$) compete against each other in a knockout tournament. The final round has $4$ contestants: the $3$ winners from brackets $A,B$, and $C$, plus $1$ of the dropouts from brackets $A$ and $B$ brought back into the game at random. What is the probability of winning for a player in group $A$ or $B$ (assuming an equal chance to advance at each stage of the tournament and an equal chance to be brought back in)?

The way I thought of this was that there are two ways that a person in group $A$ or $B$ can win: they can win the tournament by simply winning all their matches, or by losing, being brought back, and winning the final round. Adding these two probabilities should give the answer.

I calculated their chance of winning by simply winning all their matches as $\frac1{12}$, since each contestant has equal chance to advance and there are $12$.

For the other scenario, I said that there is a $\frac12$ chance they lose their first match (since all participants have equal chance of advancing) , multiplied by a $\frac16$ chance they get voted back in, (since of the original $8$ people from $A$ and $B$ only $2$ will advance and there will be $6$ dropouts and all dropouts have equal probability of being brought back). Then multiplied by $\frac14$ since there are $4$ participants in the final round and each has equal chance of winning, So $\frac12 \cdot \frac16 \cdot \frac14$.

Overall I got $\frac1{12} + \frac12 \cdot \frac16 \cdot \frac14=\frac5{48}$, which is not the answer.

The options for the answer are:

  • $\frac3{16}$

  • $\frac3{32}$

  • $\frac1{16}$

  • $\frac18$.

Thanks
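A quick Monte Carlo sketch (an editorial addition; it assumes each bracket holds $4$ players, every match is a fair coin flip, and the returning player is chosen uniformly from the $6$ dropouts of brackets $A$ and $B$) can be used to sanity-check a candidate answer against the listed options:

```python
import random

def knockout4(players):
    """Winner of a 4-player single-elimination bracket with fair coin-flip matches."""
    semi1 = random.choice(players[:2])
    semi2 = random.choice(players[2:])
    return random.choice([semi1, semi2])

def simulate(trials=200_000):
    """Estimate the probability that player 0 (a fixed member of bracket A) wins."""
    bracket_a, bracket_b, bracket_c = [0, 1, 2, 3], [4, 5, 6, 7], [8, 9, 10, 11]
    wins = 0
    for _ in range(trials):
        wa, wb, wc = knockout4(bracket_a), knockout4(bracket_b), knockout4(bracket_c)
        dropouts = [p for p in bracket_a + bracket_b if p not in (wa, wb)]
        lucky = random.choice(dropouts)                # one A/B dropout returns at random
        champion = random.choice([wa, wb, wc, lucky])  # each finalist equally likely to win
        if champion == 0:
            wins += 1
    return wins / trials

print(simulate())   # compare against the listed options (3/16, 3/32, 1/16, 1/8)
```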

Counting nickels, dimes and quarters, order matters.

Posted: 07 Nov 2021 07:23 PM PST

A parking meter costs 5 cents per minute and accepts only nickels, dimes, or quarters. How many different ways can $n$ minutes be purchased by inserting coins into the parking meter, where the order in which the coins are inserted matters? Let $A_n$ be the number of ways $n$ minutes can be purchased, and assume that the correct initial values $A_0, A_1, A_2$, etc. have been defined as needed. Recall that 1 quarter = 25 cents, 1 dime = 10 cents, and 1 nickel = 5 cents.

Answer choices

  1. $A_n = A_{n-5} + A_{n-2} + A_{n-1}$
  2. $A_n = A_{n-25} + A_{n-10} + A_{n-5}$
  3. $A_n = A_{n-5} \cdot A_{n-2} \cdot A_{n-1}$
  4. $A_n = 5\cdot A_{n-5} +2\cdot A_{n-2} + A_{n-1}$

Part of my work

  • $A_0$: 0 cents, count 0
  • $A_1$: 5 cents, count 1
  • $A_2$: 10 cents, count 2
  • $A_3$: 15 cents, count 3
  • $A_4$: 20 cents, count 5
  • $A_5$: 25 cents, count 9
  • $A_6$: 30 cents, count 14

However, my $A_4$, $A_5$, $A_6$ don't fit any of the recursive formulas given as options. $A_{n-25}$ would be related to the count 125 cents earlier, not 25 cents earlier, so option 2 is pretty clearly incorrect. Options 3 and 4 seem to grow far quicker than my table. Option 1 is the closest fit, but it breaks down at $A_6$.
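A brute-force enumeration (an editorial sketch, not part of the original post) may help double-check the hand counts; it treats a purchase as an ordered sequence of coins worth $1$, $2$, or $5$ minutes, and counts the empty sequence as the single way to buy $0$ minutes:

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def ways(minutes):
    """Ordered coin sequences (nickel = 1 min, dime = 2 min, quarter = 5 min) totalling `minutes`."""
    if minutes == 0:
        return 1                      # the empty purchase
    total = 0
    for step in (1, 2, 5):            # minutes bought by a nickel, dime, quarter
        if minutes >= step:
            total += ways(minutes - step)
    return total

print([ways(n) for n in range(7)])    # worth comparing against the hand-computed table above
```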

Laplace transform / Gaussian random variable

Posted: 07 Nov 2021 07:09 PM PST

We have a Gaussian random variable $X \sim N(0, \sigma^{2})$, with $\sigma^{2}$ unknown.

The Laplace transform is given by:

$\phi(t) := \mathbb{E} [e^{tX}] = e^{{\sigma}^{2} t^2/2}$

I need to generate $N$ i.i.d. copies of $X$ and compute the empirical mean

$\phi_{N} (t) := \frac{1}{N} \sum_{i=1}^{N} e^{t X_i}$ and then compute a confidence interval, but I am clueless about where to start. Which properties of the expected value and of Gaussian random variables can I apply here?
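As a starting point (an editorial sketch with arbitrary parameter choices, not a prescription for the exercise), one can simulate the empirical mean and attach a naive CLT-based interval to it; whether such an interval is appropriate for $e^{tX_i}$, which is heavy-tailed, is exactly the kind of issue the exercise is probing:

```python
import numpy as np

rng = np.random.default_rng(0)
sigma, t, N = 1.0, 0.5, 10_000        # arbitrary illustrative choices

X = rng.normal(0.0, sigma, size=N)    # N i.i.d. copies of X ~ N(0, sigma^2)
Y = np.exp(t * X)                     # e^{t X_i}

phi_hat = Y.mean()                    # empirical mean phi_N(t)
se = Y.std(ddof=1) / np.sqrt(N)       # standard error of the mean
ci = (phi_hat - 1.96 * se, phi_hat + 1.96 * se)   # naive 95% CLT interval

print("empirical:", phi_hat, "CI:", ci)
print("exact    :", np.exp(sigma**2 * t**2 / 2))
```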

Necessary and Sufficient Conditions for the mod-$l$ Representation Attached to an Elliptic Curve to be Unramified/Flat

Posted: 07 Nov 2021 07:23 PM PST

I am seeking references for facts from Chapter I of Cornell-Silverman-Stevens, "Modular Forms and Fermat's Last Theorem."

On page 6 of the text, the authors provide in Theorem 2.11 some "basic facts" about Galois representations attached to elliptic curves, with no references given. The first half of the theorem, about the (non-reduced) $l$-adic Galois representations attached to elliptic curves, is proven in Chapter 9 of Diamond and Shurman's "A First Course in Modular Forms," while I cannot seem to find a reference for the second half of the theorem, regarding the unramifiedness/flatness of the reduced $l$-adic Galois representation attached to an elliptic curve. Where can I find this information?

For reference, here is the theorem I would like to see a proof of or be given references for:

Let $E/\mathbb Q$ a semistable elliptic curve with minimal discriminant $\Delta_m(E)$. For $p$ prime, let $\overline{\rho}_{E, p}: G_\mathbb Q \to \operatorname{GL}_2(\mathbb F_p)$ be the reduced $p$-adic representation attached to $E$. If $l \neq p$ is prime, then $\overline{\rho}_{E, p}$ is unramified at $l$ iff $p | \operatorname{ord}_l(\Delta_m(E))$. We also have $\overline{\rho}_{E, p}$ is flat at $p$ iff $p | \operatorname{ord}_p(\Delta_m(E))$.

Relation between secant and tangent lines of a function

Posted: 07 Nov 2021 07:25 PM PST

Suppose $f: \mathbb R \rightarrow \mathbb R$ is a function with linear growth and $\lim_{x\rightarrow \infty} f'(x)$ exists and is finite. I'm trying to show that $\lim_{x\rightarrow \infty} f(x)/x$ also exists and establish conditions under which it is equal to $\lim_{x\rightarrow \infty} f'(x)$.
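One standard route (an editorial sketch, not claiming it is the intended argument) is the $\frac{\cdot}{\infty}$ form of L'Hôpital's rule, which only requires the denominator to tend to $\infty$:

$$ \lim_{x\to\infty}\frac{f(x)}{x} = \lim_{x\to\infty}\frac{f'(x)}{1} = \lim_{x\to\infty} f'(x), $$

provided the right-hand limit exists. Alternatively, writing $f(x) = f(M) + \int_M^x f'(t)\,dt$ for a large fixed $M$ and estimating $f'$ on $[M,x]$ gives the same conclusion without L'Hôpital.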

Show that $\lim_{n\to\infty}\int_0^1f(x^n)dx = f(0)$

Posted: 07 Nov 2021 07:28 PM PST

Given a continuous function $f:[0,1] \to R$, prove that $\lim_{n\to\infty}\int_0^1f(x^n)dx = f(0)$.

Attempt:

Let $u = x^n$, so $du = nx^{n-1}dx$. Then, substituting $u$, I got:

$\lim_{n\to\infty}\dfrac{\int_0^1f(u)du}{nx^{n-1}}$

Doesn't this limit go to $0$? I'm not sure which part I'm messing up on, any hints are appreciated.
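For what it's worth (an editorial remark on the attempted substitution): since $x = u^{1/n}$, the change of variables carries a $u$-dependent factor and cannot leave an $x$ outside the integral; done carefully it reads

$$ \int_0^1 f(x^n)\,dx = \frac{1}{n}\int_0^1 f(u)\,u^{\frac{1}{n}-1}\,du, $$

and it is not obvious from this form alone that the limit is $f(0)$. A common alternative is to split $[0,1]$ into $[0,1-\delta]$ and $[1-\delta,1]$ and use the fact that $x^n\to 0$ uniformly on the first piece.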

What name describes the theory of the strength of a logical statement?

Posted: 07 Nov 2021 07:40 PM PST

Most people on this site are familiar with the idea of the "strength" of a statement. Ignoring the validity of the statements themselves, "All primes are even." is a logically weaker statement than "All integers are even." Even the statement "All integers except 7 are even." still seems qualitatively stronger than "All primes are even." even though neither implies the other.

What name describes the theory of the relative strength of logical statements? I've tried searching for a variety of keywords but have turned up surprisingly little besides references to the "intuitive" definitions like that above.

Question about Folland's proof of the Change of Variables

Posted: 07 Nov 2021 07:44 PM PST

In his proof, Folland shows that if $G$ is a diffeomorphism on $\Omega\subseteq \mathbb{R}^n$, and $U$ is an open subset of $\Omega$, then $$m(G(U))\le \int_U|\det D_xG|$$ To establish the same inequality for a Borel measurable set, he starts with an arbitrary Borel set $E\subseteq \Omega$ of finite measure and then uses a decreasing sequence of open subsets $(U_k)^\infty_{k=1}$, each of finite measure, which contain $E$, with $m(\bigcap_k U_k)=m(E)$, to assert that $$m(G(E))\le m(G(\bigcap_k U_k))=\lim_k m(G(U_k))\le\lim\int_{U_k}|\det D_xG|=\int_E|\det D_xG|$$ where the equality of limits is justified by the Dominated Convergence theorem. But there is no justification for the finiteness of either $m(G(U_k))$ or the integral $\int_{U_k}|\det D_x G|$ when $m(U_k)<\infty$, and that's why I'm puzzled.

Definition of the Cartier divisor $\operatorname{Tr}_YD$

Posted: 07 Nov 2021 07:10 PM PST

I am confused by the following definition (Definition 7.7.3 in the book "Algebraic Geometry II" by Mumford and Oda, on page 258 of its draft):

Definition. If $X$ is an irreducible reduced scheme, $Y\subset X$ an irreducible reduced subscheme and $D$ is a Cartier divisor on $X$, then if $Y\not\subseteq\operatorname{Supp} D$, define $\operatorname{Tr}_Y D$ to be the Cartier divisor on $Y$ whose local equations at $y\in Y$ are just the restrictions to $Y$ of its local equations at $y\in X$.

My questions are:

  1. What word does Tr come from?
  2. What are the local equations of $D$ at $x\in X$?
  3. (Main question) What are the restrictions to $Y$ of the local equations of $D$ at $y\in Y$?

For the definition of Cartier divisors in the book, see section 3.6 (page 109 in the draft).

The following are my thoughts:

  1. Trace?
  2. I think that a local equation of $D$ at $x\in X$ is an $f$ such that $(U,f)$ represents $D$ and such that $x\in U$. In this case (i.e., $X$ integral) $f$ is an element of the function field $\mathbf R(X)$.
  3. Then the restriction to $Y$ of a local equation $f$ at $y$ must be an element of $\mathbf R(Y)$. I think this has something to do with the surjection $\mathcal O_X \to\mathcal O_Y$, but does it induce a map $\mathbf R(X)\to\mathbf R(Y)$ between function fields?

Thanks in advance.

Non-simple homogeneous components of a module.

Posted: 07 Nov 2021 07:10 PM PST

Let $M_R$ be a right $R$-module. A homogeneous component $H$ of $M_R$ is defined to be the sum $\sum_{i\in I}B_i$ where $\lbrace B_i \rbrace_{i\in I}$ is a family of mutually isomorphic simple submodules $B_i \subseteq M$. Is it true that if the homogeneous component $H$ is non-simple then $H$ is decomposable as $H=N\oplus K$ or $H=N \oplus K \oplus L$ where $N\cong K$ and $L$ is simple?

Thanks in advance.

Why can the Cauchy product be regrouped in the proof of Mertens' theorem?

Posted: 07 Nov 2021 07:05 PM PST

The Cauchy product is $C_n = (a_1b_1) + (a_1b_2+a_2b_1) + (a_1b_3+a_2b_2+a_3b_1)+...$ In the proof of Mertens' theorem, it is rewritten as $C_n = a_1(b_1+b_2+...+b_n) + a_2(b_1+...+b_{n-1})+...$ This is a different way of summing, and the regrouped expression is no longer arranged as a Cauchy product. So why can it be regrouped without any extra condition?
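A sketch of why the regrouping is harmless (an editorial note, for reference): writing $B_m = b_1+\dots+b_m$, the $n$-th partial sum of the Cauchy product is a finite sum, and finite sums may be regrouped freely:

$$ \sum_{k=1}^{n} \sum_{i+j=k+1} a_i b_j \;=\; \sum_{i=1}^{n} a_i \sum_{j=1}^{\,n+1-i} b_j \;=\; \sum_{i=1}^{n} a_i B_{n+1-i}. $$

No convergence hypothesis is needed at this step; the analytic work in Mertens' theorem lies in controlling the limit of the right-hand side.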

Combinatorics problem involving proving that a sequence exists (the pigeonhole principle needs to be used)

Posted: 07 Nov 2021 07:27 PM PST

We have a sequence of 99 integers $s_{i}$ such that $|s_{i}|\le 10^{6}$ and $\sum s_{i} \le 10^{6}$. We have to create another sequence of 99 numbers $t_{i}$, satisfying the same conditions (i.e. $|t_{i}|\le 10^{6}$ and $\sum t_{i} \le 10^{6}$), such that no sum of consecutive terms of the first sequence equals a sum of consecutive terms of the second sequence. I think the pigeonhole principle will be used, but I am not able to figure out either the pigeons or the holes. Thanks!!

Stability analysis of non-linear system outside of equilibrium point

Posted: 07 Nov 2021 07:06 PM PST

I have a non-linear system that comes from the transformation matrix for angular rotation rates from the Euler frame to the body frame: $$ \dot{r} = A + \sin(r)\tan(p)B + \cos(r)\tan(p)C \\ \dot{p} = \cos(r) B - \sin(r) C $$ where $A$, $B$, and $C$ are constants. The equilibrium point is at: $$ r_e = \arctan(B/C)\\ p_e = \arctan\left(-\frac{A}{B \sin(r_e) + C \cos(r_e)}\right) $$ When taking the Jacobian matrix and substituting the equilibrium point, I found that the real part of the eigenvalue is $$B \cos(r)\tan(p) - C \sin(r) \tan(p),$$ which always equals zero at the equilibrium point, so the system is marginally stable there. This led me to think that I need to check the system's stability away from the equilibrium point using Lyapunov theory. However, I've been struggling to find the right candidate function.

When plotting the phase portrait of the system for some values of $A$, $B$, $C$, I found that $r$ and $p$ seem to exhibit cyclical behavior like sine and cosine waves. For example, here's the phase portrait at $A = 0$, $B = 0$, $C = 1$. However, after closer examination, they are not exactly sine and cosine waves, so something like $r^2 + p^2$ for the candidate function wouldn't work. Here's the plot of $r^2 + p^2$ when simulated with $dt = 0.0001$. It turns out that the radius is slowly growing, so for this choice of $A$, $B$, and $C$, the system is unstable. However, for $A = 0.1$, $B = 0.1$, $C = -1.0$, the system is stable (plot of $r^2 + p^2$), so I think there probably exists a candidate function for which we can show that its derivative is strictly negative or positive depending on the choice of $A$, $B$, and $C$. But I'm not sure how I should go about finding this candidate function, and if Lyapunov theory is not the suitable approach here, what other analysis should I look into?

Thanks
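A small numerical experiment (an editorial sketch; the constants are the ones mentioned in the post, while the initial condition and the plain RK4 integrator are my own choices) can be used to watch $r^2+p^2$ along a trajectory for different $(A,B,C)$:

```python
import math

def rhs(r, p, A, B, C):
    """Right-hand side of the Euler-rate system from the question."""
    dr = A + math.sin(r) * math.tan(p) * B + math.cos(r) * math.tan(p) * C
    dp = math.cos(r) * B - math.sin(r) * C
    return dr, dp

def rk4_step(r, p, dt, A, B, C):
    k1 = rhs(r, p, A, B, C)
    k2 = rhs(r + 0.5 * dt * k1[0], p + 0.5 * dt * k1[1], A, B, C)
    k3 = rhs(r + 0.5 * dt * k2[0], p + 0.5 * dt * k2[1], A, B, C)
    k4 = rhs(r + dt * k3[0], p + dt * k3[1], A, B, C)
    r += dt * (k1[0] + 2 * k2[0] + 2 * k3[0] + k4[0]) / 6
    p += dt * (k1[1] + 2 * k2[1] + 2 * k3[1] + k4[1]) / 6
    return r, p

for A, B, C in [(0.0, 0.0, 1.0), (0.1, 0.1, -1.0)]:
    r, p, dt = 0.2, 0.1, 1e-3               # small perturbation, editorial choice
    for _ in range(200_000):                # integrate up to t = 200
        r, p = rk4_step(r, p, dt, A, B, C)
    print(A, B, C, "r^2 + p^2 at t = 200:", r**2 + p**2)
```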

What is the input of little o notation?

Posted: 07 Nov 2021 07:27 PM PST

In $$\sin x=x+o\left(x^{2}\right)$$

Can I write $o\left(x\right)$ instead of $o\left(x^{2}\right)$? I understand that little-$o$ means "strictly less", and after the first term $x$ of $\sin x$ in the Taylor expansion the next term is $-\frac{x^{3}}{3!}$, and $-\frac{x^{3}}{3!}$ is strictly less than $x^{2}$; but $-\frac{x^{3}}{3!}$ is also strictly less than $x$, so can I put $o\left(x\right)$ instead of $o\left(x^{2}\right)$?
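For reference (an editorial note on the definition rather than an answer): $h(x)=o\left(g(x)\right)$ as $x\to 0$ is not a statement about one term being "strictly less" than another; it means

$$ \lim_{x\to 0}\frac{h(x)}{g(x)} = 0, $$

so the question becomes whether $\sin x - x$, divided by $x$ or by $x^{2}$, tends to $0$ as $x\to 0$.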

Big O for two functions of the same order

Posted: 07 Nov 2021 07:28 PM PST

I am trying to understand big-O notation outside of algorithms, in the form $f(n)=O(g(n))$. I am specifically wondering about using this notation for functions of the same order. For example, is it OK to say $$\log(n) =O(\ln(n))?$$ I read a little about the change-of-base proof that changing between any two log bases only introduces a constant, but I am having a hard time using Wolfram to check the limits of two different bases of logs. My professor told me that for every log I see in these types of big-O problems the base can just be changed to whatever I want, but say I change the above to $$\log_2(n) = O(\ln(n)).$$ Then that seemingly can't be right, because plugging in numbers shows that the base-$2$ log is always bigger. Just visually, these functions always have the same shape, but is it correct to say $$\log(n) =O(\ln(n))?$$
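The change-of-base identity (standard, restated here editorially for reference) is what produces a constant factor rather than anything that grows:

$$ \log_2(n) = \frac{\ln(n)}{\ln(2)} \approx 1.4427\,\ln(n), $$

and a fixed constant factor is exactly what the definition $f(n) \le C\,g(n)$ for all large $n$ absorbs, even when $f(n) > g(n)$ pointwise.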

80% of the population is vaccinated and 25% of the infected had received the vaccine; what is the efficacy of the vaccine?

Posted: 07 Nov 2021 07:43 PM PST

80% of the population is vaccinated and 25% of the infected had received the vaccine before being infected. What is the efficacy of the vaccine?

Let's denote:

$\Pr(V) = 0.8$ (V="vaccinated")

$\Pr(I)$ is not given (I = "infected")

$\Pr(V\mid I) = 0.25$

What is the efficacy of the vaccine?

I am dumbfounded, as I would need either $\Pr(I)$ or $1-\Pr(I)$ to solve any probability question related to this problem. For example, $\Pr(I \mid V) = \frac{\Pr(V \mid I)\Pr(I)}{\Pr(V)}$, which can be simplified because we know both $\Pr(V)$ and $\Pr(V \mid I)$: $\Pr(I \mid V)=\frac{0.25\cdot\Pr(I)}{0.8}$, but then we hit a brick wall, as $\Pr(I)$ is not given.
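One observation that may help (an editorial note; it assumes efficacy is measured as the relative risk reduction $1 - \Pr(I\mid V)/\Pr(I\mid \bar V)$): in that ratio the unknown $\Pr(I)$ cancels,

$$ \frac{\Pr(I\mid V)}{\Pr(I\mid \bar V)} = \frac{\Pr(V\mid I)\Pr(I)/\Pr(V)}{\Pr(\bar V\mid I)\Pr(I)/\Pr(\bar V)} = \frac{\Pr(V\mid I)}{\Pr(\bar V\mid I)}\cdot\frac{\Pr(\bar V)}{\Pr(V)} = \frac{0.25}{0.75}\cdot\frac{0.2}{0.8} = \frac{1}{12}, $$

so under that definition the efficacy would be $1-\frac{1}{12}=\frac{11}{12}$.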

How to solve $\dot{x} = \frac{f(x)}{\|f(x)\|}$?

Posted: 07 Nov 2021 07:16 PM PST

How to solve the following ODE?

$$\dot{x} = \frac{f(x)}{\|f(x)\|},$$

where $x : \mathbb{R} \to \mathbb{R}^n$, i.e., $x(t)$ is the trajectory. The right-hand side $f : \mathbb{R}^n \to \mathbb{R}^n$ is a continuously differentiable function with respect to $x$. $\|\cdot\|$ is any vector norm.

I think the right-hand side of the ODE is not Lipschitz continuous. For example, take $n=1$ and $f(x)=-x$; then the right-hand side $-x/|x|$ is not even continuous at $x=0$. In this case, the standard existence and uniqueness theorem cannot be applied. How, then, can one analyze the existence of a solution? Is there finite escape time?
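A tiny numerical experiment for the one-dimensional example (an editorial sketch; forward Euler, with an explicit guard at the point where the right-hand side is undefined) illustrates what happens: the trajectory reaches $0$ in finite time, after which the vector field gives it nowhere consistent to go:

```python
def f(x):
    return -x

def rhs(x):
    # f(x) / |f(x)| is undefined where f(x) = 0
    fx = f(x)
    if fx == 0.0:
        raise ZeroDivisionError("right-hand side undefined at x = 0")
    return fx / abs(fx)

x, t, dt = 1.0, 0.0, 1e-3
while abs(x) > dt:              # stop just before the trajectory hits 0
    x += dt * rhs(x)
    t += dt
print("reached x =", x, "at t ~", t)   # roughly t = |x(0)| = 1
```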

Proof of Chow's lemma in EGAII

Posted: 07 Nov 2021 07:44 PM PST

Section 5.6 of EGAII is dedicated to Chow's lemma. I am having a hard time following an early step of the proof.

The version of Chow's lemma in the text assumes that $X$ is a separated $S$-scheme of finite type, where $S$ is either noetherian or, more generally, $S$ is quasi-compact and $X$ has a finite number of irreducible components. In these cases it is concluded that

(i) there exists a quasi-projective $S$-scheme $X'$ and a surjective, projective morphism $f:X'\to X$,

(ii) that there is an open $U\subseteq X$ such that $f|_{f^{-1}U}:f^{-1}(U)\to U$ is an isomorphism.

The step I am confused about is the one where the proof first reduces to the case when $X$ is irreducible. To do this, it says to assume that Chow's lemma has been proved in the irreducible case; then, taking $X_i$ to be the irreducible components and $f_i:X'_i\to X_i$ to satisfy the conditions of Chow's lemma, we have that if $X'$ is the disjoint union of the $X_i'$ and $f$ is the morphism $X'\to X$ which restricts to $f_i$ on $X_i'$, then $f:X'\to X$ witnesses Chow's lemma.

However, what confuses me here is how we can take the individual $X_i$. What is the scheme structure on them? If we take the reduced structure (which is what the proof seems to say to do), condition (ii) will not necessarily hold. Is there a way to fix this?

I notice that the Stacks project takes a different approach to the proof, and it seems to need $S$ to actually be noetherian. Is this the way to fix it?

Thanks for any help

Regression with a Vandermonde matrix.

Posted: 07 Nov 2021 07:05 PM PST

From what I understand, when doing a least squares regression with a Vandermonde matrix, you're essentially solving the equation

$y=Xa$

Where $y$ is a vector of $y$-values, $X$ is the Vandermonde matrix, and $a$ is a vector of coefficients.

When you solve this equation for $a$,

$a=(X^TX)^{-1}X^Ty$

You get the above expression for $a$.

My understanding is that this should be a solution to the set of equations. However, it is possible to fit $n$ points of data with a $k$-th degree polynomial, where $n>k$. This would imply that there are more equations than unknowns, which results in no exact solution. However, the vector $a$ can still be calculated. This resulting vector does not perfectly satisfy

$y=Xa$

but is instead a good approximation, as expected from regression.

Why can we get a value for $a$? I do not see how this would be possible if we were dealing with $n$ equations and $k$ unknowns. Where does the matrix solution differ from solving for $k$ unknowns with $n$ equations?
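A short sketch (editorial; NumPy, with made-up data) showing that the normal-equation vector $a$ exists even though no exact solution does, and that it minimizes the residual rather than making it zero:

```python
import numpy as np

rng = np.random.default_rng(1)
n, k = 10, 3                               # 10 points, cubic fit: more equations than unknowns

x = np.linspace(0.0, 1.0, n)
y = np.sin(2 * np.pi * x) + 0.1 * rng.normal(size=n)   # made-up data

X = np.vander(x, k + 1, increasing=True)   # n x (k+1) Vandermonde matrix
a = np.linalg.solve(X.T @ X, X.T @ y)      # normal equations: a = (X^T X)^{-1} X^T y

residual = y - X @ a
print("coefficients:", a)
print("residual norm (nonzero in general):", np.linalg.norm(residual))
print("lstsq agrees:", np.allclose(a, np.linalg.lstsq(X, y, rcond=None)[0]))
```

The point is that the normal equations characterize the minimizer of $\|y - Xa\|^2$, which always exists when $X$ has full column rank, rather than an exact solution of $y = Xa$.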

A three variable binomial coefficient identity

Posted: 07 Nov 2021 07:43 PM PST

I found the following problem while working through Richard Stanley's Bijective Proof Problems (Page 5, Problem 16). It asks for a combinatorial proof of the following: $$ \sum_{i+j+k=n} \binom{i+j}{i}\binom{j+k}{j}\binom{k+i}{k} = \sum_{r=0}^{n} \binom{2r}{r}$$ where $n \ge 0$, and $i,j,k \in \mathbb{N}$, though any proof would work for me.

I also found a similar identity in Concrete Mathematics, which was equivalent to this one, but I could not see how the identity follows from the hint provided in the exercises.

My initial observation was to note that the ordinary generating function of the right-hand side is $\displaystyle \frac {1}{1-x} \frac{1}{\sqrt{1-4x}}$, but I couldn't think of any way to establish the same generating function for the left-hand side.
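A quick numerical verification for small $n$ (an editorial sketch, with no proof content) in case it is useful while experimenting:

```python
from math import comb

def lhs(n):
    total = 0
    for i in range(n + 1):
        for j in range(n + 1 - i):
            k = n - i - j
            total += comb(i + j, i) * comb(j + k, j) * comb(k + i, k)
    return total

def rhs(n):
    return sum(comb(2 * r, r) for r in range(n + 1))

print(all(lhs(n) == rhs(n) for n in range(20)))   # True over the tested range
```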
