Saturday, October 16, 2021

Recent Questions - Mathematics Stack Exchange

Find an antiderivative $F(z)$ for the principal value of $z^i$, and find the domain in which $F'(z) = f(z)$

Posted: 16 Oct 2021 08:15 PM PDT

We are given that $f(z) = PV(z^i)$, where $PV$ denotes the principal value of the function.
In general, the antiderivative of $f(z) = z^i$ would be $F(z) = z^{(i+1)}/(i+1)$, so would the antiderivative of $f(z) = PV(z^i)$ be $F(z) = PV(z^{i+1})/(i+1)$?

Would the domain in which $F'(z) = f(z)$ be $\{z\in \mathbb{C}: z\neq0 \land \operatorname{Arg} z \neq \pi\}$?
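Not a proof, but a quick numerical sanity check of the proposed antiderivative is easy to run, using the principal logarithm to define both principal values (the test point below is my own choice, away from the branch cut on the negative real axis):

```python
import cmath

def f(z):
    # principal value of z^i, i.e. exp(i * Log z) with Log the principal logarithm
    return cmath.exp(1j * cmath.log(z))

def F(z):
    # proposed antiderivative: PV(z^(i+1)) / (i + 1)
    return cmath.exp((1 + 1j) * cmath.log(z)) / (1 + 1j)

z = 1.3 + 0.7j   # test point off the ray Arg z = pi
h = 1e-6
print((F(z + h) - F(z - h)) / (2 * h))   # numerical derivative of F
print(f(z))                              # the two values should closely agree
```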

How to prove limit using epsilon-delta definition

Posted: 16 Oct 2021 08:16 PM PDT

How to prove $$\lim_{n \to \infty} \frac{n^2}{a^n} =0 \left ( a>1 \right ) $$

I tried using Bernoulli's inequality, but it's difficult to cancel out the term involving $a$.

How to prove the existence of $x_0 \in (0,1)$ s.t. $f'(x_0)+\frac{1}{x_0} f(x_0)=2$

Posted: 16 Oct 2021 08:16 PM PDT

Suppose $f(x)$ is differentiable on $[0,1]$ and $\int_{0}^{1}f(x)\,dx=1/2$. How can one show that there exists $x_0 \in (0,1)$ s.t. $f'(x_0)+\frac{1}{x_0} f(x_0)=2$?

I think it is equivalent to proving $\exists x_0$ s.t. $(xf(x))'\big|_{x=x_0}=2x_0$. I tried to use $F(x)=xf(x)-x^2$ with Rolle's theorem but failed. Then I tried $F(x)=\int_{0}^{x}f(y)\,dy$ with Taylor's theorem, but that also failed.

Show that $\{X_n\}$ form a martingale.

Posted: 16 Oct 2021 08:09 PM PDT

Let $\{\xi_i\}^\infty_{i=0}$ be a sequence of real-valued, jointly distributed random variables that satisfy $E[\xi_i\mid\xi_0,\xi_1,\dots,\xi_{i-1}]=0$ for $i=1,2,\dots$. Define $$X_0=\xi_0,\quad X_{n+1}=\sum^n_{i=0}\xi_{i+1}f_i(\xi_0,\xi_1,\dots,\xi_i),$$ where the $f_i$ are a prescribed sequence of functions of $i+1$ real variables. Show that $\{X_n\}$ forms a martingale.

Asymptotic integration of $\int_0^\infty\frac{x^{-\frac{1}{2}+a}J_{-\frac{1}{2}+a}(x\alpha)}{e^x-1}{\rm d}x$ when $\alpha\gg1$

Posted: 16 Oct 2021 08:05 PM PDT

What is the asymptotic integration of

$$\int_0^\infty\frac{x^{-\frac{1}{2}+a}J_{-\frac{1}{2}+a}(x\alpha)}{e^x-1}{\rm d}x$$

as $\alpha\rightarrow\infty$? How can one compute it using standard identities? Please give a general expression and then consider the special case $a=\frac{5}{2}$.
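Not an answer, but a crude numerical experiment (my own choice of cutoff and of $\alpha$ values) can at least suggest how the integral scales with $\alpha$ in the special case $a=\frac{5}{2}$, as a target for whatever asymptotic formula one derives:

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import jv

a = 2.5   # the special case a = 5/2, so the Bessel order is -1/2 + a = 2

def integrand(x, alpha):
    return x**(-0.5 + a) * jv(-0.5 + a, alpha * x) / np.expm1(x)

def I(alpha, upper=40.0):
    # crude truncation of the upper limit; the 1/(e^x - 1) factor decays fast
    val, _ = quad(integrand, 0, upper, args=(alpha,), limit=2000)
    return val

for alpha in (5, 10, 20, 40):
    print(alpha, I(alpha))   # watch how the value scales as alpha doubles
```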

Show that the function $f(z) = 1/(z-4)$ has an antiderivative in a domain that contains the unit circle (where the unit circle is a contour)

Posted: 16 Oct 2021 08:04 PM PDT

Show that the function $f(z) = 1/(z-4)$ has an antiderivative in a domain containing $C$, where $C$ is the contour given by the unit circle $|z| = 1$.

The function $f(z) = 1/(z-4)$ is continuous on the domain $D = \{ z\in \mathbb{C}: z \neq4\}$, and this domain contains the unit circle. However, how would I go about finding the antiderivative? Is it some form of a complex logarithm? My thinking is that the antiderivative would be $F(z) = \log(z-4) = \ln r + i\theta$, where $r = |z-4|>0$ and $0<\theta<2\pi$. Is this on the right track?

I would appreciate it if anyone could give any help or hints on this problem.
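As a quick consistency check (not a proof), one can verify numerically that the integral of $f$ around the unit circle vanishes, which is what the existence of an antiderivative on a domain containing $C$ would force:

```python
import numpy as np

n = 200_000
t = np.linspace(0, 2 * np.pi, n, endpoint=False)
z = np.exp(1j * t)                           # parametrize the unit circle
dz = 1j * np.exp(1j * t) * (2 * np.pi / n)   # dz for each step of the parametrization
print(np.sum(1.0 / (z - 4) * dz))   # ~ 0, consistent: the singularity at 4 lies outside |z| = 1
print(np.sum(1.0 / z * dz))         # ~ 2*pi*i, for contrast: 1/z has no antiderivative near |z| = 1
```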

Determine if the series converges absolutely, conditionally, or diverges.

Posted: 16 Oct 2021 08:08 PM PDT

Determine if the series converges absolutely, converges conditionally, or diverges. Find the exact value for the sum of the convergent series. $$1-\frac{1}{5} - \frac{1}{5^2} + \frac{1}{5^3} - \frac{1}{5^4} - \frac{1}{5^5} + \frac{1}{5^6} - \frac{1}{5^7} - \frac{1}{5^8} \cdots$$ I have no clue where to start. I tried using the comparison test, comparing this series to $\sum_{n=1}^{\infty}\frac{1}{n^2}$, but that only tells me that this is a convergent series, not what value it converges to or whether it is absolutely or conditionally convergent. Any advice and tips on how to solve this problem, and these types of problems in general, would be greatly appreciated. Thank you in advance :)
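Not a solution method, but here is a quick numerical check under the assumption (mine) that the sign pattern $+,-,-$ repeats with period $3$; grouping the terms in threes then suggests a geometric series with ratio $1/125$:

```python
from fractions import Fraction

signs = [1, -1, -1]                       # assumed repeating sign pattern
partial = sum(Fraction(signs[n % 3], 5**n) for n in range(60))
print(float(partial))                                      # ≈ 0.766129...
print(float(Fraction(19, 25) / (1 - Fraction(1, 125))))    # (1 - 1/5 - 1/25) * 1/(1 - 1/125) = 95/124
```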

Comparison between Zariski and analytic Picard functor

Posted: 16 Oct 2021 07:55 PM PDT

Let $f: X \rightarrow S$ be a projective morphism between varieties over $\mathbb{C}$. Under what condition can we compare the functor $\mathrm{Pic}_{X/S,Zar}$ and $\mathrm{Pic}_{X/S,an}$?

The situation that I'm interested in is when they are both representable, so for simplicity we may assume that $f$ is flat and cohomologically flat in dimension $0$, and that the fibres of $f$ are integral.

Is every $\sigma$-algebra generated by a Borel function?

Posted: 16 Oct 2021 07:54 PM PDT

Let $X$ be a set and $\mathcal{F}$ a $\sigma$-algebra. Does there exist a topological space $U$ and a map $f:X \to U$ such that $f$ is ($\mathcal{F}, \mathcal{B}$)-measurable and $\sigma(f) = \mathcal{F}$? Here $\mathcal{B}$ is the Borel $\sigma$-algebra of $U$.

Of course, this is trivial if every $\sigma$-algebra on a set is the Borel $\sigma$-algebra with respect to some topology on the set. But this needn't be true, so the question above is weaker.

Taking limits in the integrand

Posted: 16 Oct 2021 07:54 PM PDT

Consider an integral expression $$I(\omega)=\int_{-\infty}^{\infty}G(\omega,t)\,dt \tag{1}$$
with $$G(\omega,t)=g(\omega,t)\,e^{t/\tau}.$$ Now suppose I am interested in the case where $t\ll\tau$; then one could write $$G(\omega,t)=g(\omega,t)\,e^{t/\tau}\approx g(\omega,t)\left(1+\frac{t}{\tau}\right) \tag{2}$$

and use the last expression as the integrand. However, is this simple procedure trouble-free? I have seen procedures like this performed regularly, but it seems to me that this process is not correct: the integration variable $t$ ranges from $-\infty$ to $\infty$, and close to the upper limit the case $t\ll\tau$ does not apply if we assume that $\tau$ is finite. Is my reasoning correct? If so, are there mathematically correct ways to study the $t\ll\tau$ case?
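To make the concern concrete, here is a small numerical experiment with a particular $g$ of my own choosing that decays rapidly in $t$; in that case the replacement is harmless precisely because $g$ suppresses the region where $t\ll\tau$ fails, which is the point at issue:

```python
import numpy as np
from scipy.integrate import quad

tau = 50.0
g = lambda t: np.exp(-t**2)   # an assumed, rapidly decaying g(omega, t) at fixed omega

exact, _ = quad(lambda t: g(t) * np.exp(t / tau), -np.inf, np.inf)
approx, _ = quad(lambda t: g(t) * (1 + t / tau), -np.inf, np.inf)
print(exact, approx)   # both ≈ sqrt(pi); here the difference is only O(1/tau**2)
```

For a $g$ that decays slowly (or not at all), the exponential factor dominates at large $t$ and this comparison would fail badly, which is exactly the objection raised above.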

Calculating the Covariance for a random variable/zero-mean intrinsically stationary random field

Posted: 16 Oct 2021 07:54 PM PDT

I am struggling to work out the Covariance for the following question:

[image of the problem statement omitted]

Prove that there exists an embedding $h: M\rightarrow \mathbb{R}^n$ such that $h(M)\cap A=\emptyset$.

Posted: 16 Oct 2021 07:53 PM PDT

Let $A$ be a submanifold of $\mathbb{R}^n$ with $\dim A<n$, and let $M^m$ be a differentiable manifold. Suppose $n\geq 2m+1$. Prove that there exists an embedding $h: M\rightarrow \mathbb{R}^n$ such that $h(M)\cap A=\emptyset$.

I just learned the Whitney embedding theorem. Intuitively, $\dim A<n$ implies $A$ is of measure $0$ in $\mathbb{R}^n$, so the image of $h$ is not likely to "touch" $A$.

Can the proposition be proved by modifying the proof of the Whitney embedding theorem? I would appreciate any help.

Series of non-negative terms

Posted: 16 Oct 2021 07:49 PM PDT

Say I have an infinite series $\sum_{n=0}^{\infty} a_n$ of non-negative terms which converges. Can I assert that for any $N \in \mathbb{N}$, $$ \sum\limits_{n=0}^N a_n \leq \lim\limits_{N \to \infty} \sum\limits_{n=0}^{N} a_n = \sum\limits_{n=0}^{\infty} a_n\,? $$ I'm not sure how to prove this from the standard limit theorems, since the notion of a "limit" of an infinite series is a bit abstract for me.

How to prove $|z_1+z_2|<|1+\bar{z_1}z_2|$ if $|z_1|<1$ and $|z_2|<1$

Posted: 16 Oct 2021 08:05 PM PDT

I would like to prove that $|z_1+z_2|<|1+\bar{z_1}z_2|$ if $|z_1|<1$ and $|z_2|<1$.

I tried to multiply the numerator and denominator by $|1+z_1\bar{z_2}|$, and also to consider the square, but I couldn't finish either approach. Does anyone have a hint for how I can solve this problem?
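Not a hint toward the proof, but a quick random search for counterexamples (points drawn uniformly from the square and filtered to the open disk) can build confidence in the statement before investing in the algebra:

```python
import numpy as np

rng = np.random.default_rng(0)
worst = np.inf
for _ in range(100_000):
    a, b, c, d = rng.uniform(-1, 1, 4)
    z1, z2 = complex(a, b), complex(c, d)
    if abs(z1) < 1 and abs(z2) < 1:
        worst = min(worst, abs(1 + z1.conjugate() * z2) - abs(z1 + z2))
print(worst)   # stays positive, consistent with the claimed strict inequality
```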

Expressing A = LU in smaller matrix size

Posted: 16 Oct 2021 08:14 PM PDT

I was wondering if we can express A as the product of a 5 x 2 matrix and a 2 x 5 matrix for A = LU where:

A = [3 2 1 0 -1; 6 6 1 1 -2; -3 -6 1 -2 1; 6 6 1 1 -2; 0 2 -1 1 0]

L = [1 0 0 0 0; 2 1 0 0 0; -1 -2 1 0 0; 2 1 0 1 0; 0 1 0 0 1]

U = [3 2 1 0 -1; 0 2 -1 1 0; 0 0 0 0 0; 0 0 0 0 0; 0 0 0 0 0]

Which factorization can I use and how do I solve it to express A as the product of a 5 x 2 matrix and a 2 x 5 matrix?
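Since $U$ has only two nonzero rows, one natural guess is that the first two columns of $L$ times the first two rows of $U$ already reproduce $A$; a quick numerical check (using the matrices exactly as written above) is consistent with that:

```python
import numpy as np

A = np.array([[ 3,  2,  1,  0, -1],
              [ 6,  6,  1,  1, -2],
              [-3, -6,  1, -2,  1],
              [ 6,  6,  1,  1, -2],
              [ 0,  2, -1,  1,  0]])
L = np.array([[ 1,  0, 0, 0, 0],
              [ 2,  1, 0, 0, 0],
              [-1, -2, 1, 0, 0],
              [ 2,  1, 0, 1, 0],
              [ 0,  1, 0, 0, 1]])
U = np.array([[3, 2,  1, 0, -1],
              [0, 2, -1, 1,  0],
              [0, 0,  0, 0,  0],
              [0, 0,  0, 0,  0],
              [0, 0,  0, 0,  0]])

B, C = L[:, :2], U[:2, :]            # 5 x 2 and 2 x 5
print(np.array_equal(A, L @ U))      # True: the given factorization is consistent
print(np.array_equal(A, B @ C))      # True: rows 3-5 of U are zero, so only two columns of L matter
```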

Fourier transform of the Bochner-Riesz multipliers

Posted: 16 Oct 2021 07:37 PM PDT

How does one obtain the decay of the Fourier transform of the Bochner-Riesz multipliers? For $\lambda>0$ define $\hat{m}_{\lambda}(x)=\int_{\mathbb{R}^d} (1-|\xi|^2)_{+}^{\lambda}\,e^{2\pi i x\cdot \xi}\,d\xi$; then $|\hat{m}_{\lambda}(x)|\sim |x|^{-\frac{d+1}{2}-\lambda}$ as $|x|\rightarrow \infty$.

The classic way to do this is to compute the transform exactly using Bessel functions, but Tao mentions in his notes (1999) that there is another, more robust ("fuzzier") way; however, I don't get his point. Can someone explain this in detail, or provide other methods that avoid special functions?

PS: The "fuzzier" method he mentions: we can assume $x=\lambda e_d$, then:

First: $\int_{|\xi|\leq 1}(1-|\xi|^2)_{+}^{\lambda}e^{2\pi i x\cdot \xi}\,d\xi=\int_{|\xi_{d}-1|\ll1}+\int_{|\xi_{d}+1|\ll1}+\int_{\text{other}}$; the "other" piece is easy.

Second: $(1-|\xi|^2)_{+}=(fd\sigma)*\mu+error$, where f is an appropriate function on $\mathbb{S}^{d-1}$ and $\mu(\xi)=\delta_0(\xi^{'})\eta(\xi_d)(-\xi_d)_{+}^{\lambda}$ ($\xi=(\xi^{'},\xi_d)$) such that the "error" term vanishes to order $\lambda+1$.

Third: we can repeat the procedure above to obtain an error term of high enough order to give the decay, and the main term can also be handled.

I don't understand what happens in the second and third steps. (How does one select $f$? What is the "vanishing order" that leads to the decay?)
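Not the robust argument being asked about, but a one-dimensional numerical check of the decay exponent (my own toy case $d=1$, $\lambda=1$) may help for orientation; at integer $x$ the oscillatory factor sits at its extremum, so a log-log fit should return roughly $-(d+1)/2-\lambda=-2$:

```python
import numpy as np
from scipy.integrate import quad

lam = 1.0

def m_hat(x):
    # hat m_lambda(x) = int_{-1}^{1} (1 - xi^2)^lam * cos(2 pi x xi) d(xi); the sine part vanishes by symmetry
    val, _ = quad(lambda xi: (1 - xi**2)**lam, -1, 1,
                  weight='cos', wvar=2 * np.pi * x, limit=400)
    return val

xs = np.array([20.0, 40.0, 80.0, 160.0])
vals = np.abs([m_hat(x) for x in xs])
slope, _ = np.polyfit(np.log(xs), np.log(vals), 1)
print(slope)   # close to -2
```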

Chain rule question for derivative of inverse

Posted: 16 Oct 2021 08:17 PM PDT

I feel like I'm missing something obvious here. I understand the graphical proof of $(f^{-1})'(a) = \frac{1}{f'(f^{-1}(a))}$, but I'm stuck on one aspect of the simple non-graphical proof. Start with $y = f^{-1}(x)$, and thus $x = f(y)$. If we then take $$f(f^{-1}(x)) = f(y) = x$$ and do implicit differentiation with the chain rule, we get $$f'(y)\cdot\frac{dy}{dx} = 1,$$ and the rest follows when you note that $\frac{dy}{dx}$ at $a$ is $(f^{-1})'(a)$.

All works out well when you take $f'(y)$ to be $\frac{df}{dx}(y)$. And here's where I get stuck: this is not the derivative that you get when you apply the chain rule (it seems to me, at least). Applying the chain rule to a composition of functions gives $$\frac{d\,g(h(x))}{dx} = \frac{dg}{dh}\cdot\frac{dh}{dx}$$ and NOT $$\frac{d\,g(h(x))}{dx} = \frac{dg}{dx}\cdot\frac{dh}{dx}.$$ So what the differentiation should really give is $$\frac{df}{dy}\cdot\frac{dy}{dx} = 1,\quad\text{in other words:}\quad \frac{df}{df^{-1}}\cdot\frac{dy}{dx} = 1.$$ This gives a very different result. Example: suppose $f = x^{2}$, so that $y = \sqrt{x}$. On the one hand, $\frac{df}{dx} = 2x$, but on the other hand, $\frac{df}{dy} = \frac{df}{dx}\cdot\frac{dx}{dy} = 2x\cdot 2y = 4x^{3/2}$. What am I missing?! Thanks in advance.
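For whatever it's worth, a quick symbolic check of the identity $(f^{-1})'(a) = 1/f'(f^{-1}(a))$ in the example $f(x)=x^2$ (restricted to $x>0$) confirms that the two sides do agree, which may help localize where the $df/dy$ computation above goes astray:

```python
import sympy as sp

x = sp.symbols('x', positive=True)
f = x**2
finv = sp.sqrt(x)                          # inverse of f on x > 0
lhs = sp.diff(finv, x)                     # derivative of the inverse at x
rhs = 1 / sp.diff(f, x).subs(x, finv)      # 1 / f'(f^{-1}(x))
print(lhs, rhs, sp.simplify(lhs - rhs))    # 1/(2*sqrt(x)), 1/(2*sqrt(x)), 0
```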

Process where you add an unused factor

Posted: 16 Oct 2021 07:32 PM PDT

Consider the following process:

  1. Choose a positive integer $x$.
  2. Choose a proper factor $f$ of $x$ that hasn't been used before ($f \neq 1$ and $f \neq x$). If there is no such $f$ then terminate.
  3. Set $x$ to $x+f$ and go to step 2.

For example, suppose we start with 4. 2 is a factor of 4, so we go to 6. We cannot reuse the factor 2, so we must use factor 3, going to 9. We cannot go any further in this case.

Is this process guaranteed to terminate, or can it go on indefinitely? I don't know the answer and I'm not sure where to begin.
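Not an answer, but the process is easy to experiment with. Here is a small sketch (using sympy only for the divisor list) that runs the process with a fixed greedy choice of factor; note that a greedy rule only gives a lower bound on how long a chain can be, since the real question allows any clever choice of factors at each step:

```python
from sympy import divisors

def run(x, pick=min, limit=500):
    """Run the process with the choice rule `pick`; give up after `limit` steps."""
    used = set()
    for step in range(limit):
        options = [f for f in divisors(x)[1:-1] if f not in used]   # proper factors not yet used
        if not options:
            return step, x, "terminated"
        f = pick(options)
        used.add(f)
        x += f
    return limit, x, "still going at the step limit"

for start in (4, 6, 12, 30, 360):
    print(start, run(start), run(start, pick=max))
```

Of course, no finite experiment settles the question; an argument would need to handle every possible choice sequence.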

Why does an almost complex manifold have $d = \mu + \bar{\mu} + \partial + \bar{\partial}$, while a complex manifold has only $\partial+\bar{\partial}$?

Posted: 16 Oct 2021 07:32 PM PDT

Let $X$ be an almost complex manifold, with tangent bundle equipped with a complex structure $I$.

Then we know the exterior derivative decomposes into four parts, $d= \mu + \bar{\mu} + \partial + \bar{\partial}$, with the $\mu$ component landing in $\mathcal{A}^{(p+2,q-1)}$ and the $\bar{\mu}$ component in $\mathcal{A}^{(p-1,q+2)}$, where $\mathcal{A}^{(p,q)}$ denotes the differential forms of type $(p,q)$.

I can't figure out the difference between a complex manifold and an almost complex manifold: we know that on a complex manifold $d = \partial +\bar{\partial}$. The proof is as follows. Since

$$d(f)=\sum_{i} \frac{\partial f}{\partial x_{i}} d x_{i}+\sum_{i} \frac{\partial f}{\partial y_{i}} d y_{i}=\sum_{i} \frac{\partial f}{\partial z_{i}} d z_{i}+\sum_{i} \frac{\partial f}{\partial \bar{z}_{i}} d \bar{z}_{i}$$

we have $$d(fdz_{I}\wedge d\bar{z}_J) = \sum_k \partial_{z_k}{f}dz_k\wedge dz_{I}\wedge d\bar{z}_J +\sum_l\partial_{\bar{z}_l}{f}d\bar{z}_l\wedge dz_{I}\wedge d\bar{z}_J$$

Hence $d = \partial + \bar{\partial}$.

The question is: why does an almost complex manifold have a $\mu$ component? The proof looks almost the same.

Existence of injective and non surjective functions

Posted: 16 Oct 2021 08:06 PM PDT

Let $X = [0,1] \times [0,1]$ and $Y = [0,1]$. Is there an injective, non-surjective function $f : X \longrightarrow Y$?

Thanks in advance.

Finding the supremum and infimum

Posted: 16 Oct 2021 07:35 PM PDT

Given $S\subset \Bbb Q$, where $S = \{(2n-1)/n:n\in\Bbb N\}$:

Find if S has a supremum and infimum in $\Bbb Q$ and $\Bbb R$. If so, what are the values?

My answer:

The supremum in $\Bbb R$ is the value the sequence converges to as $n$ approaches infinity. A quick limit computation shows that it is $2$.

The infimum in $\Bbb R$ is attained at $n=1$, so $(2n-1)/n = (2(1)-1)/1=1$.

I'm a little less certain about $\Bbb Q$. I think it has the same infimum as in $\Bbb R$, and that it does not have a supremum, but I'm not sure how to show this.

Writing $\phi$ and $\theta$ in terms of time difference of arrival

Posted: 16 Oct 2021 07:40 PM PDT

I have an experimental setup consisting of $3$ receivers (with known locations $\langle x_i,y_i,z_i\rangle$, $\langle x_j,y_j,z_j\rangle$, $\langle x_k,y_k,z_k\rangle$), and a transmitter with unknown coordinates $\langle x_m, y_m, z_m\rangle$ (emitting a signal with known speed $c$).

I'm given the time of arrival of a signal emitted by the transmitter for each receiver $t_i, t_j, t_k$. I've previously solved the problem of determining the transmitter coordinates in $3$-dimensions using $4$ receivers, and in $2$-dimensions using $3$ receivers given that the transmitter and receivers are co-planar.

In this case, as I have only $3$ receivers, I have insufficient data to determine the $3$-dimensional coordinates of the transmitter. Instead, what I'd like to do is solve for the transmitter's spherical coordinates $\theta$ and $\varphi$.


For some reason, my brain is failing me, and I can't seem to get $\theta$ and $\varphi$ in terms of the receiver coordinates, signal velocity, and/or times of signals arrival. I know it can be done (I was able to do this successfully some time ago).

Once I have the equations, I should be able to manage solving them computationally. I simply need to work out the equations.

How do I write $\theta$ and $\varphi$ in terms of the knowns?


EDIT:

Here's what I have thus far (relatively easy to show)...

Assume that the source emits plane waves. Below, the grey lines indicate the isoplanes, and the red diamonds indicate receivers $1$ and $2$ (receiver $3$ not shown).

[figure: plane wavefronts (grey lines) crossing receivers $1$ and $2$ (red diamonds)]

Clearly, $d_t = d\cos\theta$. However, $d_t$ also equals $c\tau$, where $\tau$ is the propagation delay between the two receivers. We may then write:

$$d\cos\theta = c\tau$$

...yielding $\theta$.

Now, for $\varphi$...
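In case it helps, here is a rough sketch of how I would set this up numerically under a far-field (plane-wave) assumption and my own sign conventions, which may differ from yours: each receiver pair gives one equation $(\mathbf{r}_i-\mathbf{r}_j)\cdot\mathbf{u} = c\,(t_j-t_i)$ for the unit direction $\mathbf{u}$ toward the source, and the constraint $|\mathbf{u}|=1$ supplies the third equation (up to a mirror ambiguity across the receiver plane):

```python
import numpy as np

def direction(r1, r2, r3, t1, t2, t3, c):
    """Estimate (theta, phi) of a far-field source from three receivers (sketch only)."""
    A = np.array([r1 - r2, r1 - r3])              # 2 x 3 matrix of baselines
    b = c * np.array([t2 - t1, t3 - t1])          # measured path-length differences
    u0, *_ = np.linalg.lstsq(A, b, rcond=None)    # minimum-norm solution of A u = b
    n = np.cross(r1 - r2, r1 - r3)                # direction the two baselines cannot see
    n /= np.linalg.norm(n)
    s = np.sqrt(max(0.0, 1.0 - u0 @ u0))          # complete u to a unit vector (sign ambiguity remains)
    u = u0 + s * n
    theta = np.arccos(np.clip(u[2], -1, 1))       # polar angle
    phi = np.arctan2(u[1], u[0])                  # azimuth
    return theta, phi

# made-up receiver positions (metres) and arrival times (seconds), just to exercise the function
r1, r2, r3 = np.array([0., 0., 0.]), np.array([10., 0., 0.]), np.array([0., 10., 0.])
print(direction(r1, r2, r3, 0.0, 1.2e-8, 2.3e-8, c=3e8))
```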

Unusual variable mass density integral

Posted: 16 Oct 2021 08:00 PM PDT

I could really use some help with an unusual variable mass density integral.

The following expression has a variable $h$ that goes from $0$ to $1$ (metres) and, when multiplied by a fixed radius of $9.25\times 10^{7}$ (Planck lengths), outputs a certain amount of mass:

$$\sqrt{\frac{\hbar h}{4G}} \cdot r$$

$\hbar$ is the reduced Planck constant, $1.0546 \times 10^{-34}$.

$G$ is the gravitational constant, $6.67 \times 10^{-11}$.

So, for example, in the image below there are three spheres, each with a different value of $h$ but all with the same radius of $9.25\times 10^{7}$ (Planck lengths). As you can see, each sphere has a different amount of mass based on its value of $h$.

[image: three spheres of equal radius with different values of $h$ and hence different masses]

What I'm trying to solve, and need an integral for, is this: between $0$ and $1$ (for the variable $h$) there would be infinitely many spheres produced, each with a slightly different amount of mass.

If I took an infinitesimal slice (a disk) from each sphere and then stacked those disks (each with a different mass density), they would form a cylinder with a variable mass density looking like the following image.

Disclaimer: I'm not great at inserting math equations into Stack Exchange, so I added the incomplete integral I've been working on directly into the image. Even though I know it's wrong, I want to show that I do have some understanding of what I'm attempting to do.

So each disk has a mass density based on the variable $h$ from the original expression, and $h$ matches the height at which that disk is stacked in the cylinder.

For example, the disk that came from the sphere with an $h$ value of $1$ is at the very top of the cylinder, at $1$ metre.

The disk that came from the sphere with an $h$ value of $0.5$ is at the middle of the cylinder, at $0.5$ metres. The variable $a$ in the integral equals the maximum radius, which is $9.25\times 10^{7}$ (Planck lengths).

I need a way to be able to calculate the total mass of this cylinder that has a height of 1 meter.

[image: the stacked-disk cylinder, with my incomplete attempt at the integral]

I'm certain the integral itself needs to involve the original expression in some way: take an infinitesimal disk from each of the spheres it produces, stack those infinitely many disks into a cylinder, and then calculate the total mass.

$${\sqrt{\frac {\hbar{h} }{4G}}}$$

As you can see, I think I'm in over my head, so any help would be greatly appreciated.

I hope I explained this clearly and appreciate any help given ~ thanks.
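I may be misreading the setup, but if the intended quantity is "mass contributed per unit height at height $h$ equals $\sqrt{\hbar h/(4G)}\cdot r$" (my assumption, with the units left exactly as stated in the question), then the total is a one-dimensional integral over $h$ from $0$ to $1$, which also has a simple closed form:

```python
import numpy as np
from scipy.integrate import quad

hbar = 1.0546e-34        # reduced Planck constant, as given
G = 6.67e-11             # gravitational constant, as given
r = 9.25e7               # fixed radius, in Planck lengths as given

m = lambda h: np.sqrt(hbar * h / (4 * G)) * r         # assumed mass per unit height at height h
total, _ = quad(m, 0, 1)
closed_form = (2 / 3) * r * np.sqrt(hbar / (4 * G))   # since the integral of sqrt(h) over [0, 1] is 2/3
print(total, closed_form)
```

If the density instead varies within each disk (for example with distance from the axis), the integral picks up an extra radial factor, so it would help to pin down exactly what quantity $\sqrt{\hbar h/(4G)}\cdot r$ is meant to represent.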

The smallest value of the expression $4x^2y^2+x^2+y^2-2xy+x+y+1$

Posted: 16 Oct 2021 07:36 PM PDT

What is the smallest value that $4x^2y^2+x^2+y^2-2xy+x+y+1$ can take with real numbers $x$ and $y$?

I suspect the following transformation can be done: $(2xy-1/2)^2 + (x+1/2)^2 + (y+1/2)^2 + 1/4$.
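The suspected decomposition can be verified by expanding, and a quick multi-start numerical minimization (the function is not convex, hence several starting points) is consistent with the smallest value being $1/4$, attained near $(x,y)=(-1/2,-1/2)$:

```python
import numpy as np
from scipy.optimize import minimize

F = lambda v: 4*v[0]**2*v[1]**2 + v[0]**2 + v[1]**2 - 2*v[0]*v[1] + v[0] + v[1] + 1

rng = np.random.default_rng(0)
best = min((minimize(F, x0) for x0 in rng.uniform(-2, 2, size=(50, 2))), key=lambda r: r.fun)
print(best.fun, best.x)   # ≈ 0.25 at approximately (-0.5, -0.5)
```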

How to find the exact value of the integral $ \int_{0}^{\infty} \frac{\sin ^{2n+1} x}{x^{2}} d x$, where $n$ is a natural number?

Posted: 16 Oct 2021 07:48 PM PDT

Let $I(m,n):=\displaystyle \int_{0}^{\infty} \frac{\sin ^{m} x}{x^{n}} d x$, where $0<n\leq m.$

A month ago, I had found,in my essay, that when $m$ and $n$ are of the same parity and $2\leq n\leq m,$ $$\boxed{\int_0^{\infty} \frac{\sin^{m}x}{x^{n}}dx=\frac{(-1)^{\frac{m-n}{2}} \pi}{2^{m}(n-1) !} \sum_{k=0}^{\left\lfloor\frac{m-1}{2}\right\rfloor}(-1)^{k}\left(\begin{array}{l} m \\ k \end{array}\right)(m-2 k)^{n-1}}, $$

In other words, $I(m,n) =\dfrac{a\pi}{b}$ for some natural numbers $a$ and $b$.

However, when $m$ and $n$ are of different parity, $$I(m,n) =\dfrac{a}{b}\ln\left(\dfrac{c}{d}\right)$$ for some natural numbers $a, b, c$ and $d$.

Then I started to investigate whether there is a formula for $I(m,n)$ when $m$ and $n$ are of different parity.

Starting from an easy case, I am going to find, by Frullani's integral theorem, a formula for $$I(2n+1,2)=\int_{0}^{\infty} \frac{\sin ^{2n+1} x}{x^{2}} d x =\frac{2 n+1}{2^{2 n}(-1)^{n-1}} \sum_{k=0}^{n-1}(-1)^{k}\left(\begin{array}{c} 2 n-1 \\ k \end{array}\right) \ln \left|\frac{2 n+1-2 k}{2 n-3-2 k}\right|. $$ Proof: $$ \begin{aligned} I(2 n+1,2) &=\int_{0}^{\infty} \sin ^{2 n+1} x \, d\left(-\frac{1}{x}\right) \\ &\stackrel{IBP}{=} \int_{0}^{\infty} \frac{(2 n+1) \sin ^{2 n} x \cos x}{x} d x \\ &=(2 n+1) \int_{0}^{\infty} \frac{\sin 2 x}{2 x} \cdot \sin ^{2 n-1} x \, d x \end{aligned} $$ Expressing $\sin^{2n-1}x$ as a linear combination of $\sin((2n-1-2k)x)$ yields $$\begin{array}{l} \displaystyle \quad I(2n+1,2)\\ \displaystyle =\frac{2 n+1}{2} \int_{0}^{\infty} \frac{\sin 2 x}{x} \left[\frac{1}{2^{2 n-2}(-1)^{n-1}} \sum_{k=0}^{n-1}(-1)^{k}\left(\begin{array}{c} 2 n-1 \\ k \end{array}\right) \sin (2 n-1-2 k) x \right]dx\\ \displaystyle =\frac{2 n+1}{2^{2 n-1}(-1)^{n-1}} \sum_{k=0}^{n-1}(-1)^{k}\left(\begin{array}{c} 2 n-1 \\ k \end{array}\right)\int_{0}^{\infty} \frac{\sin 2 x \sin (2 n-1-2 k)x}{x} d x \\ \displaystyle =\frac{2 n+1}{2^{2 n}(-1)^{n-1}} \sum_{k=0}^{n-1}(-1)^{k}\left(\begin{array}{c} 2 n-1 \\ k \end{array}\right) \int_{0}^{\infty} \frac{\cos (2 n-3-2 k) x-\cos (2 n+1-2k)x}{x} dx\\ \displaystyle =\frac{2 n+1}{2^{2 n}(-1)^{n-1}} \sum_{k=0}^{n-1}(-1)^{k}\left(\begin{array}{c} 2 n-1 \\ k \end{array}\right) \ln \left|\frac{2 n+1-2 k}{2 n-3-2 k}\right|\quad \blacksquare \end{array}$$ (The last step uses Frullani's integral theorem.)

For example: $$ \begin{aligned} \int_{0}^{\infty} \frac{\sin ^{5} x}{x^{2}} d x &=\frac{-5}{16}\left[\left(\begin{array}{c} 3 \\ 0 \end{array}\right) \ln 5-\left(\begin{array}{l} 3 \\ 1 \end{array}\right) \ln 3\right]=\frac{5}{16} \ln \frac{27}{5} \end{aligned} $$ $$ \int_{0}^{\infty} \frac{\sin ^{7} x}{x^{2}} d x=\frac{7}{64}\left[\left(\begin{array}{l} 5 \\ 0 \end{array}\right) \ln \left(\frac{7}{3}\right)-\left(\begin{array}{c} 5 \\ 1 \end{array}\right) \ln \left(\frac{5}{1}\right)+\left(\begin{array}{l} 5 \\ 2 \end{array}\right) \ln \left(\frac{3}{1}\right)\right]= \frac{7}{64} \ln \left(\frac{137781}{3125}\right) $$ Eureka! I succeeded in finding a formula for $I(2n+1,2)$. I believe that we can find a reduction formula and use induction to prove that when $m$ and $n$ are of different parity, $$I(m,n) =\dfrac{a}{b}\ln\left(\dfrac{c}{d}\right).$$ However, it is difficult to find a general formula for $I(m,n)$ when $m$ and $n$ are of different parity.
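A crude numerical cross-check of the two worked examples (truncating the upper limit, which is harmless here since the integrand decays like $1/x^2$) agrees with the stated closed forms:

```python
import numpy as np
from scipy.integrate import quad

def I(m, n, upper=2000):
    val, _ = quad(lambda x: np.sin(x)**m / x**n, 0, upper, limit=2000)
    return val

print(I(5, 2), 5 / 16 * np.log(27 / 5))          # both ≈ 0.527
print(I(7, 2), 7 / 64 * np.log(137781 / 3125))   # both ≈ 0.414
```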

Please give me suggestions on how to go further. Thank you very much!

Natural inclusion $\varepsilon(R)\to E(K)$, $x\mapsto x$, regarding the Néron model of an elliptic curve

Posted: 16 Oct 2021 07:35 PM PDT

Let $E$ be an elliptic curve over a field $K$, and let $\varepsilon$ be a Néron model of $E/K$. I have heard there is a natural inclusion $\varepsilon(R)\to E(K)$, $x\mapsto x$.

I understand that $\varepsilon(R)=\{R\text{-morphisms } \operatorname{Spec}R\to\varepsilon\}$ (sections of the $R$-scheme $\varepsilon$) and $E(K)=\{K\text{-morphisms } \operatorname{Spec}K\to E\}$ ($K$-rational points), but I cannot understand why the map in the title is an inclusion.

Could you explain why an $R$-morphism $\operatorname{Spec}R\to\varepsilon$ can be regarded as a $K$-morphism $\operatorname{Spec}K\to E$?

(In the situation where $E$ and $\varepsilon$ are defined by the same equation, $\varepsilon(R)$ and $E(K)$ correspond to $R$- and $K$-rational points of $E$, so the inclusion is obvious; but in general this is not assumed.)

Thank you in advance.

On probabilities involving Poisson processes

Posted: 16 Oct 2021 08:09 PM PDT

Question

Arrivals of Buses A and B at a particular bus stop form independent Poisson processes with rate parameters $\lambda_1$ and $\lambda_2$ respectively.

$(a)\quad$ What is the probability that exactly 4 buses (of type A and/or B) arrive in the time interval $[0, t]$?

$(b)\quad$ What is the probability that exactly 3 Bus Bs arrive while I am waiting for a Bus A?

$(c)\quad$ If half of the buses break down before they reach my stop, then what is the probability that not a single bus passes me in the time interval $[0, t]$?

Hints

$(i)\quad$ Recall that $$\int^{\infty}_0 x^{\alpha - 1} e^{-\beta x}\ \mathrm{d}x = \frac {\Gamma(\alpha)} {\beta^{\alpha}}.$$

$(ii)\quad$ In $(c)$, you may assume that thinning has occurred.

My working

$(a)$

Let $W$ be the random variable denoting the number of buses arriving in the interval $[0, t]$.

$$\implies W \sim \mathrm{Poisson}((\lambda_1 + \lambda_2)t)$$

$$\implies \mathbb{P}(W = 4) = \frac {[(\lambda_1 + \lambda_2)t]^4 e^{-(\lambda_1 + \lambda_2)t}} {4!}$$

$(b)$

Let $X$ and $Y$ be the random variables denoting the waiting time for Buses A and B respectively.

$$\implies X \sim \mathrm{Exponential}(\lambda_1)\ \mathrm{and}\ Y \sim \mathrm{Exponential}(\lambda_2)$$

$$\implies \mathbb{P}(Y < X) = \frac {\lambda_2} {\lambda_1 + \lambda_2}$$

Thus, the required probability is $$\left(\frac {\lambda_2} {\lambda_1 + \lambda_2}\right)^3.$$

$(c)$

In my lecture notes, thinning is described as follows: Let $X$ be a Poisson random variable with parameter $\lambda$ and suppose that, instead of being able to count all events that occur, we only get to observe each event with probability $p$ and with probability $1 - p$ we miss it. If $Y$ is the random variable counting the number of events we actually got to see (as opposed to the number of events that actually occurred, which is what $X$ was counting), then $Y$ is also Poisson with parameter $p\lambda$ and is sometimes called a thinned Poisson variable, since the actual number of events that occur is "thinned out" before they are counted by $Y$.

However, I am not sure how to apply this concept to $(c)$.


As I have only just covered Poisson processes, I would like to know if my answers to parts $(a)$ and $(b)$ are correct and any intuitive explanations as to what the solution for $(c)$ is would be greatly appreciated :) Moreover, where does Hint $(i)$ come into play?
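Not a verification of the theory, but a quick simulation (the parameters are arbitrary choices of mine) gives something to compare the answers to parts $(a)$ and $(b)$ against:

```python
import numpy as np

rng = np.random.default_rng(1)
lam1, lam2, t = 0.7, 1.3, 2.0
trials = 200_000

# (a) total number of arrivals (A or B) in [0, t]: the superposition is Poisson with rate lam1 + lam2
n_total = rng.poisson((lam1 + lam2) * t, trials)
print((n_total == 4).mean(),
      ((lam1 + lam2) * t)**4 * np.exp(-(lam1 + lam2) * t) / 24)

# (b) number of B arrivals during the waiting time for the first A arrival
wait_A = rng.exponential(1 / lam1, trials)
n_B = rng.poisson(lam2 * wait_A)
print((n_B == 3).mean(), (lam2 / (lam1 + lam2))**3)   # simulation vs the attempted answer above
```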

Solving $\int_0^{\infty} \frac{\sin^m(x)}{x^n} dx$ for $m, n \in \mathbb{Z}^+$

Posted: 16 Oct 2021 07:39 PM PDT

I saw this question and thought of the more general integral $$I(m, n) =\int_0^{\infty} \frac{\sin^m(x)}{x^n} dx$$

where $m, n$ are positive integers, and the integral converges. The integral converges when $2 \le n \le m$, or when $n = 1$ and $m$ is odd.

I tried integration by parts, but was not able to get anywhere. Setting $u = \sin^m(x)$ and $dv = \frac{dx}{x^n}$ when $n \not= 1$, I got $$\frac{\sin^m(x)}{(1-n)x^{n-1}}\Biggl|^\infty_0-\int_0^{\infty}\frac{m\cos(x)\sin^{m-1}(x)}{(1-n)x^{n-1}} dx=\frac{m}{n-1}\int_0^{\infty}\frac{\cos(x)\sin^{m-1}(x)}{x^{n-1}} dx$$

This seems just as bad, if not worse, than the original integral. Even when I tried setting $u$ and $dv$ to something else, I could not get something that looked easier to solve.


Switching tracks completely, I will try to use Feynman's trick (in a similar way as Mark Viola did in the top answer to the linked question). We can write $$F(s) = \int_0^{\infty} \frac{\sin^m(x)}{x^n} e^{-sx}dx$$

Differentiating $F(s)$ with respect to $s$, $n$ times, yields $$F^{(n)}(s) = \int_0^{\infty} (-1)^n e^{-sx}\sin^m(x) dx$$

The integral can be rewritten as a rational function, but I realized that it would become extremely messy to integrate that $n$ times. Because of this, I didn't even bother finding the analytical expression of this.


I found without proof that $I(2k+1, 1) = \frac{\binom{2k}{k}\pi}{2^{2k+1}}$
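That claimed value is easy to spot-check numerically (again with a crude cutoff; the tail contributes little because $\sin^{2k+1}x$ has zero mean over each period):

```python
import numpy as np
from scipy.integrate import quad
from math import comb, pi

def I_num(m, n, upper=2000):
    val, _ = quad(lambda x: np.sin(x)**m / x**n, 0, upper, limit=2000)
    return val

for k in (1, 2, 3):
    print(k, I_num(2 * k + 1, 1), comb(2 * k, k) * pi / 2**(2 * k + 1))
```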

How can I solve the integral, whether by integration by parts, Feynman's trick, or some other method?

Differential Equation Describing the Flowing of Water Out of a Tank

Posted: 16 Oct 2021 08:02 PM PDT

I have the differential equation $${{dh} \over {dt}} = - K\sqrt h $$

describing water flowing out of the bottom of a tank of uniform cross-sectional area under the action of gravity. Here $h(t)$ is the water depth at time $t$, with ${h_0}$ being the initial depth, and $K$ is a positive constant.

I have no idea how to solve this. I only know how to separate variables to solve DEs. How would I solve this one? This is one of the starred questions in the book. I can't find another example like this. Thanks!
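For what it's worth, this equation is itself separable: writing $dh/\sqrt{h} = -K\,dt$ and integrating gives $2\sqrt{h} = 2\sqrt{h_0} - Kt$, i.e. $h(t) = \left(\sqrt{h_0} - \tfrac{Kt}{2}\right)^2$, valid until the tank empties at $t = 2\sqrt{h_0}/K$. A quick numerical check of that closed form, with made-up values of $K$ and $h_0$:

```python
import numpy as np
from scipy.integrate import solve_ivp

K, h0 = 0.3, 4.0                      # made-up constants for the check
rhs = lambda t, h: -K * np.sqrt(np.maximum(h, 0.0))
sol = solve_ivp(rhs, (0, 5), [h0], dense_output=True, rtol=1e-8)

ts = np.linspace(0, 5, 6)
print(sol.sol(ts)[0])                         # numerical solution of dh/dt = -K*sqrt(h)
print((np.sqrt(h0) - K * ts / 2)**2)          # closed form from separation of variables
```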

What are some Applications of the Permanent of a Matrix?

Posted: 16 Oct 2021 08:00 PM PDT

I have a decent understanding of the determinant of a matrix in terms of its role in

  • Telling you if a matrix is invertible (zero vs. nonzero)
  • Expressing the product of a matrix's eigenvalues with multiplicities
  • Representing the constant term in a matrix's characteristic polynomial
  • Having geometrical interpretations

However, I was curious to learn whether the operation known as the permanent has any interesting properties. The permanent is defined in the same way as the determinant, but all terms in the expansion are added instead of alternating between positive and negative signs.

In particular, what are the significant applications of defining the permanent as a notable matrix operation? Also, are there any matrix problems whose solutions directly/indirectly depend on understanding the permanent?
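One concrete application worth knowing: for a $0/1$ matrix viewed as the biadjacency matrix of a bipartite graph, the permanent counts the perfect matchings (which is also why computing it is #P-hard in general, unlike the determinant). A small brute-force sketch:

```python
import numpy as np
from itertools import permutations

def permanent(A):
    """Naive permanent: the determinant's expansion with every sign replaced by +."""
    n = len(A)
    return sum(np.prod([A[i, p[i]] for i in range(n)]) for p in permutations(range(n)))

# Biadjacency matrix of a small bipartite graph; the permanent counts its perfect matchings.
A = np.array([[1, 1, 0],
              [0, 1, 1],
              [1, 0, 1]])
print(permanent(A))   # 2
```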
