Monday, May 9, 2022

Recent Questions - Mathematics Stack Exchange


Continuous-time Markov chains: system with components in parallel and enough repairmen. How to calculate availability?

Posted: 09 May 2022 03:13 PM PDT

I am trying to solve a continuous-time Markov chain exercise, but I have been stuck for a few days and I don't know how to continue, or even if what I have done so far is correct.

Consider a system having five identical independent components.

Each component works for an exponentially distributed time, measured in days, with parameter 4.

When a component breaks down, its repair begins immediately, as sufficient technicians are available to address any failure at any time. The duration of a repair is also exponential and lasts half a day on average.

Determine the long-run fraction of time the system is out of service because all of its components are damaged.

I consider state n as the state where exactly n components are under repair. So I built the transition matrix Q as below:

$$ Q=\begin{pmatrix} -20 & 20 & 0 & 0 & 0 & 0\\ 2 & -18 & 16 & 0 & 0 & 0\\ 2 & 2 & -16 & 12 & 0 & 0\\ 2 & 2 & 2 & -14 & 8 & 0\\ 2 & 2 & 2 & 2 & -12 & 4\\ 2 & 2 & 2 & 2 & 2 & -10\\ \end{pmatrix} $$

I'm not totally sure that the part below the diagonal is correct.

If it is, any hint how to calculate the system availability in the long run?
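A possible numerical sketch, in case it helps. One caveat first: with fully parallel repairs the chain is usually birth–death, i.e. from state $n$ the only transitions are $n\to n+1$ at failure rate $4(5-n)$ and $n\to n-1$ at repair rate $2n$ (each of the $n$ failed components has its own technician working at rate $2$), so the constant sub-diagonal above may be worth double-checking. Under that assumption, the long-run unavailability is the stationary probability $\pi_5$ from $\pi Q=0$, $\sum_n \pi_n=1$:

```python
import numpy as np

# Assumed birth-death generator: state n = number of failed components.
# Failure: n -> n+1 at rate 4*(5-n); repair: n -> n-1 at rate 2*n.
n_states = 6
Q = np.zeros((n_states, n_states))
for n in range(n_states):
    if n < 5:
        Q[n, n + 1] = 4 * (5 - n)   # failure rate
    if n > 0:
        Q[n, n - 1] = 2 * n         # parallel-repair rate
    Q[n, n] = -Q[n].sum()

# Stationary distribution: solve pi Q = 0 together with sum(pi) = 1.
A = np.vstack([Q.T, np.ones(n_states)])
b = np.zeros(n_states + 1)
b[-1] = 1.0
pi, *_ = np.linalg.lstsq(A, b, rcond=None)
print(pi[5])  # long-run fraction of time all five components are down
```

As a cross-check: the components are then independent, each down a fraction $4/(4+2)=2/3$ of the time, so $\pi_5=(2/3)^5=32/243\approx0.132$.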

Exponential map definition

Posted: 09 May 2022 03:00 PM PDT

I have a general question about the definition of the exponential map. I am using the definition on page 72 of John M. Lee's "Introduction to Riemannian Manifolds" (https://www.maths.ed.ac.uk/~v1ranick/papers/leeriemm.pdf). The part where the exponential map is defined on the vectors whose corresponding geodesics are defined on [0,1] seems a bit arbitrary. What would change about the exponential map if we took [0,2] instead in the definition? Thanks for any help!
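A sketch of one relevant observation, assuming the rescaling lemma for geodesics ($\gamma_{cv}(t)=\gamma_v(ct)$): if one instead declared $\widetilde{\exp}_p(v)=\gamma_v(2)$ on the vectors whose geodesic is defined on $[0,2]$, then

$$\widetilde{\exp}_p(v)=\gamma_v(2)=\gamma_{2v}(1)=\exp_p(2v),$$

so the new map would just be the usual one precomposed with scaling by $2$ (with a correspondingly rescaled domain); nothing essential changes.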

How do I prove that the weak law of large numbers holds?

Posted: 09 May 2022 03:10 PM PDT

We have given $X_1,X_2,...$ an i.i.d. sequence of random variables with $$\Bbb{P}(X_1=1)=\Bbb{P}(X_1=-1)=\frac{1}{2}$$ From class we know that then the characteristic function is $\Phi_{X_i}(t)=\cos(t)$ for all $t\in \Bbb{R}$ and we know that $\lim_{n\rightarrow \infty} \cos^n\left(\frac{t}{n}\right)=1$ for all $t\in \Bbb{R}$. Now I need to use these two to show that $$\frac{X_1+...+X_n}{n}\rightarrow 0$$in probability.

Unfortunately, I have no idea where to start or how to do this using the two points I listed above. Could someone maybe give me a hint?

Thanks for your help.
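A sketch of the standard argument, assuming Lévy's continuity theorem is available: by independence, the characteristic function of $\bar X_n=(X_1+\dots+X_n)/n$ is

$$\Phi_{\bar X_n}(t)=\prod_{k=1}^n\Phi_{X_k}(t/n)=\cos^n(t/n)\longrightarrow 1\quad\text{for all }t\in\Bbb{R},$$

and $1$ is the characteristic function of the constant $0$. So $\bar X_n\to 0$ in distribution, and convergence in distribution to a constant implies convergence in probability.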

Upper triangulation of a matrix versus diagonalization

Posted: 09 May 2022 03:07 PM PDT

I have tried to google this question, but could not find any hints. This is important to me because I am dealing with 3- and 4-dimensional matrices.

Is it true that an upper triangularization (Gaussian elimination) of a matrix is diagonalizable iff the original matrix is diagonalizable?

Or some result related to this?

Thank you!
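If I understand the question correctly, the answer appears to be no in general: row operations do not preserve eigenvalues, and diagonalizability can be lost. A minimal numeric sketch (the matrices below are my own illustrative choice, not from any reference):

```python
import numpy as np

# A is symmetric, hence diagonalizable; one elimination step (R2 <- R2 - R1)
# yields U, a Jordan block, which is NOT diagonalizable.
A = np.array([[1.0, 1.0],
              [1.0, 2.0]])
U = np.array([[1.0, 1.0],
              [0.0, 1.0]])

print(np.linalg.eigvals(A))                  # two distinct eigenvalues
print(np.linalg.eigvals(U))                  # repeated eigenvalue 1
print(np.linalg.matrix_rank(U - np.eye(2)))  # rank 1 -> only one eigenvector
```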

Parallel hyperplanes in $c_0$.

Posted: 09 May 2022 02:48 PM PDT

Consider $c_0$, the space of real sequences converging to zero, and let $B_1(\bar{x})$ be the closed unit ball in $c_0$. Consider $H = \{x : f(x) = 0\},$ where $f(x) = \displaystyle \sum_{i = 1}^{\infty} 2^{-i} x_i$. We want to know whether there is a supporting hyperplane of $B_1(\bar{x})$ which is parallel to $H$.

My attempt: suppose there is a supporting hyperplane parallel to $H$ (with supporting point $a$). It should be of the form $\{f(x) = k\}, k\in \mathbb{R}$. Hence if all of $B_1(\bar{x})$ lies in $\{f(x) \ge 0\}$ or in $\{f(x) \le 0\}$, then we can consider the symmetric point $-a$, for which we have $f(-a) = -f(a)$. Contradiction.

I'm not sure about one point: if we have two parallel hyperplanes, do their equations differ only in the constant? I.e. $\{f(x) = 0\}$ and $\{f(x) = k\}$, as in our case.
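On that last point, a sketch assuming the usual definition (hyperplanes are parallel iff they are translates of a common kernel): if $H_1=\{g=a\}$ is parallel to $H=\{f=0\}$, then $\ker g=\ker f$, so $g=\lambda f$ for some $\lambda\neq0$ and

$$H_1=\{x: f(x)=a/\lambda\}.$$

So yes, hyperplanes parallel to $H$ are exactly the level sets of the same functional $f$, differing only in the constant.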

Definite integral of the square root of a fourth-degree polynomial

Posted: 09 May 2022 03:13 PM PDT

I am trying to solve

$$ \int_{b_1}^{b_2}\sqrt{(b-b_1)(b_2-b)(b_3-b)(b_4-b)} db $$

where $ b_1 < b_2 < b_3 < b_4 $. I am quite sure this is some linear combination of elliptic integrals, but I cannot find the right reduction. My first idea was to make a change of variable in order to get

$$ K_0\int_{-1}^{1} \sqrt{(1-t^2)(T_3-t)(T_4-t)}\, dt $$

by $ t=(2b - b_1- b_2)/(b_2-b_1) $ and where $K_0 = (b_2-b_1)^3/8$. But I am stuck at this point.
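While hunting for the reduction, a numerical reference value can guard against algebra slips. A minimal sketch with hypothetical endpoint values (the $b_i$ below are placeholders, not from the problem):

```python
import numpy as np
from scipy.integrate import quad

# Hypothetical sample values with b1 < b2 < b3 < b4
b1, b2, b3, b4 = 0.0, 1.0, 2.0, 3.0

val, err = quad(lambda b: np.sqrt((b - b1) * (b2 - b) * (b3 - b) * (b4 - b)),
                b1, b2)
print(val, err)  # any candidate elliptic-integral expression must match this
```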

mean time average of successive displacements between two orbits

Posted: 09 May 2022 02:44 PM PDT

Suppose $x_{n+1}=A_nx_n$ is a two-dimensional linear non-autonomous dynamical system, and starting at $x_0\neq0$ and $x_0+\delta_0\neq 0$ we follow the dynamics of the displacement between the two orbits, which by linearity is $\delta_{n+1}=A_n\delta_n$. Assume that the usual time average vanishes for any $x_0$ and $x_0+\delta_0$ as above, i.e. $\lim_{n\longrightarrow\infty}\frac{1}{n}\sum_{k=0}^{n-1}\log\frac{\Vert\delta_{k+1}\Vert}{\Vert\delta_{k}\Vert}=0$. I often read that in this case the 'system is conservative' or 'in some sort of cycle/steady state', but I have never seen a proof. How do you actually prove it?
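For what it's worth, the sum telescopes, which may be all that such claims contain:

$$\frac{1}{n}\sum_{k=0}^{n-1}\log\frac{\Vert\delta_{k+1}\Vert}{\Vert\delta_{k}\Vert}=\frac{1}{n}\log\frac{\Vert\delta_{n}\Vert}{\Vert\delta_{0}\Vert},$$

so the hypothesis says exactly that $\Vert\delta_n\Vert$ grows and decays subexponentially (largest Lyapunov exponent zero). 'Conservative' in any stronger sense (e.g. volume preservation) would need additional hypotheses.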

Function defined through integral not continuous

Posted: 09 May 2022 02:44 PM PDT

I came across a problem which I can't resolve. Let $f(x,y)=\begin{cases} x^2 \sin(1/x)e^{-x^2 |y|} \text{ if } x \neq 0 \\ 0 \text{ if } x = 0 \end{cases}$. This function is continuous and $y \mapsto f(x,y)$ is integrable with $g(x):= \int_\mathbb{R} f(x,y)dy = \begin{cases} 2\sin(1/x) \text{ if } x \neq 0 \\ 0 \text{ if } x=0 \end{cases}$.

Now $g$ is not continuous at $0$; however, I would expect it to be. It seems to me that if $(-\varepsilon,\varepsilon)$ is a neighbourhood of $x=0$, I can define $h(y)=\varepsilon ^2 e^{-\varepsilon ^2 |y|}$, and I think that $|f(x,y)|\leq h(y)$ for all $x \in (-\varepsilon,\varepsilon)$ and all $y \in \mathbb{R}$, and $h$ is integrable.

However, from dominated convergence, $g$ should be continuous if such $h(y)$ exists. I don't see where I miss something. Any help is appreciated.
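A computation that may locate the gap, treating $u=x^2$ as the variable: for fixed $y\neq0$ the map $u\mapsto ue^{-u|y|}$ increases up to $u=1/|y|$, so

$$\sup_{|x|<\varepsilon}x^2e^{-x^2|y|}=\frac{1}{e|y|}\qquad\text{whenever }|y|\ge\varepsilon^{-2},$$

and $1/(e|y|)$ is not integrable at infinity. Since the pointwise supremum is the smallest possible dominating function, the proposed $h$ cannot dominate $f$, and indeed no integrable dominating function exists on any neighbourhood of $0$.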

Image of the exponential map on the 2-Sphere at a point

Posted: 09 May 2022 02:39 PM PDT

Take the north pole of the standard 2-sphere in $\mathbb{R^3}$ equipped with the riemannian metric induced by the standard metric on $ \mathbb{R^3}$. What is the image of the exponential map of the tangent space at the north pole? I am having trouble with the definition of the exponential map making it hard for me to compute this. Thanks for any help!
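A sketch, assuming the standard fact that the geodesics of the round sphere are unit-speed great circles: for $v\in T_pS^2$, $v\neq0$, with $p$ the north pole,

$$\exp_p(v)=\cos(\vert v\vert)\,p+\sin(\vert v\vert)\,\frac{v}{\vert v\vert},$$

so every point of $S^2$ is hit (the south pole by any $v$ with $\vert v\vert=\pi$), and the image of the exponential map is all of $S^2$.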

How to show the recursive sequence $(x_n)\in\mathbb{R}$ such that $x_1>1,x_{n+1}=2-1/x_n$ is decreasing? (Please answer base case only!)

Posted: 09 May 2022 02:37 PM PDT

I'm working on Exercise $3.3.2$ from Introduction to Real Analysis by Bartle and Sherbert, which says:

Let $x_1>1$ and $x_{n+1}:=2-1/x_n$ for $n\in\mathbb{N}.$ Show that $(x_n)$ is bounded and monotone. Find the limit.

First, to get an idea of the behavior of the sequence to see whether it was increasing or decreasing, I took as an example $x_1=2$ and deduced that $x_2=1\frac{1}{2},x_3=1\frac{1}{3},x_4=1\frac{1}{4},$ etc., so it seems the sequence must be decreasing.

So now I'm trying to prove that $(x_n)$ is decreasing by proving the statement $P(n):x_{n+1}\leq x_n$ is true for all $n\in\mathbb{N}$ and I got stuck at the base case, $P(1):x_2\leq x_1$. If at all possible, I'd only like help with the base case of just the proof of monotonicity so that I can use that support as a springboard to figure out the rest of the problem on my own.

So far I have deduced that since $x_1>1$, we have $-1/x_1>-1$ (right?). So then

$$ x_2=2-\frac{1}{x_1}>2-1=1.$$

That only tells me that $x_2>1,$ and I don't think this really helps me show that $x_2\leq x_1.$ (However, this will help me prove that $(x_n)$ is bounded when that time comes...)

I also tried:

$$x_2=2-\frac{1}{x_1}=\frac{2x_1-1}{x_1}<2x_1-1<2x_1$$

But surely $x_2<2x_1$ doesn't imply that $x_2\leq x_1$...

Are there any other ways to show $x_2\leq x_1$?

To be clear, please only give me a hint for the base case above -- I want to try to figure out the rest on my own. Thank you in advance for your help! :)
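Since only a base-case hint is requested, here is one possible direction: rather than bounding $x_2$, compute the difference directly,

$$x_1-x_2=x_1-2+\frac{1}{x_1}=\frac{x_1^2-2x_1+1}{x_1}=\frac{(x_1-1)^2}{x_1},$$

and consider its sign given $x_1>1$.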

$\alpha<\beth_\omega\implies 2^\alpha<\beth_\omega$

Posted: 09 May 2022 02:37 PM PDT

"Let $\alpha$ be a cardinal such that $\alpha<\beth_\omega$. Then $2^\alpha<\beth_\omega$".

I'm stuck; any suggestions or references on the $\beth$ function? :(
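A sketch of the standard argument, assuming the usual definition $\beth_\omega=\sup_{n<\omega}\beth_n$ with $(\beth_n)$ strictly increasing: if $\alpha<\beth_\omega$ then $\alpha\le\beth_n$ for some $n<\omega$, hence

$$2^\alpha\le 2^{\beth_n}=\beth_{n+1}<\beth_\omega.$$

Any set-theory text covering cardinal arithmetic (e.g. Jech's Set Theory) treats the $\beth$ hierarchy.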

Inverse Fourier transform of a rectangle

Posted: 09 May 2022 02:30 PM PDT

Suppose I have the function $f(x,y)=c\cdot \chi_{[0,1]\times[0,1]}$ for some $c\in \mathbb{R}$. Is it possible to find the inverse Fourier transform of this function? I am trying to use this to show that a discretization of a kernel of a Hilbert space is positive definite, and to do that I need the inverse Fourier transform.
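A sketch, using the convention $\check f(\xi)=\int_{\mathbb{R}^2}f(x)e^{2\pi i x\cdot\xi}\,dx$ (other conventions only change constants): the integral factors over the coordinates, and

$$\check f(\xi_1,\xi_2)=c\prod_{j=1}^{2}\int_0^1 e^{2\pi i x_j\xi_j}\,dx_j=c\prod_{j=1}^{2}e^{\pi i\xi_j}\,\frac{\sin(\pi\xi_j)}{\pi\xi_j},$$

a product of phase-shifted sinc functions.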

Why is this wrong?

Posted: 09 May 2022 03:11 PM PDT

For $x^x= 2$, the claimed solution is:

$x\cdot \ln(x) = \ln(2)$

$x\cdot\int(1/x)\,dx = \ln(2)$

$\int(x/x)\,dx = \ln(2)$

$x+c = \ln(2)$, where $c = 0$

so, $x = \ln (2)$

Can someone tell me why this solution is wrong?
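One way to see the flaw: $\ln x$ is a specific function, and replacing it by the indefinite integral $\int(1/x)\,dx$ (then moving the outer factor $x$ inside the integral sign) confuses the integration variable with the point of evaluation; $x\int(1/x)\,dx$ is not $\int(x/x)\,dx$, and the arbitrary constant $c$ cannot simply be set to $0$. The equation $x\ln x=\ln 2$ is transcendental; if a closed form is wanted, it is usually written with the Lambert $W$ function:

$$x=e^{W(\ln 2)}=\frac{\ln 2}{W(\ln 2)}\approx 1.5596.$$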

What function results from evaluating my function in the limit as n goes to positive infinity?

Posted: 09 May 2022 02:39 PM PDT

I am looking at a special case of the binomial PDF, where $$p = \frac{1}{n}$$ i.e., $$f(n,k) = \frac{n!}{k!(n-k)!} \left(\frac{1}{n}\right)^k \left(1-\frac{1}{n}\right)^{n-k}$$

I have observed empirically that in the limit that $n\rightarrow \infty$, the two-variable PDF appears to become a new one-variable PDF: $f(n,k)\rightarrow g(k)$.

I would like a closed form expression for this new function $g(k)$, if such an expression exists.
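This looks like the classical Poisson limit (law of rare events) with $\lambda=np=1$; assuming $k$ is held fixed as $n\to\infty$,

$$f(n,k)=\binom{n}{k}\left(\frac1n\right)^k\left(1-\frac1n\right)^{n-k}\longrightarrow \frac{e^{-1}}{k!}=:g(k),$$

since $\binom{n}{k}n^{-k}\to 1/k!$ and $(1-1/n)^{n-k}\to e^{-1}$.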

Given $p(x)$ to be a polynomial with non-negative integer coefficients, if $p(1)=5$ and $p(5)=151$, find possible values of $p(4)$.

Posted: 09 May 2022 02:49 PM PDT

Given $p(x)$ to be a polynomial with non-negative integer coefficients, if $p(1)=5$ and $p(5)=151$, find possible values of $p(4)$.

As there was no mention of the degree/number of terms of the polynomial in the question, I looked for a way to somehow bound the degree/number of terms.

As the coefficients are non-negative integers summing to $5$ (since $p(1)=5$), there can be at most 5 nonzero coefficients (even if there are other coefficients, they would be zero, so they won't affect the value of the polynomial). I tried making progress with this idea but got stuck.
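A standard hint for this type of problem (a sketch, not a full solution): $p(1)=5$ forces every coefficient to be at most $5$; if some coefficient were exactly $5$, the polynomial would be $5x^k$ and $p(5)$ would be a power of $5$, which $151$ is not. So all coefficients lie in $\{0,\dots,4\}$, and then $p(5)=151$ says the coefficients are precisely the base-$5$ digits of $151$, which determines them uniquely; comparing the resulting digit sum with $p(1)=5$ is then the thing to check.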

Limit of integral for my homework

Posted: 09 May 2022 03:12 PM PDT

Hello and sorry for bothering you! But I really need help with my math homework… I have to evaluate this limit, can you help me please? Thank you so much!

https://i.stack.imgur.com/dG5nl.jpg

Clarification- Complements of a normal Hall Subgroup are Conjugate

Posted: 09 May 2022 02:40 PM PDT

While reading through "Permutation Groups" by Donald S. Passman, I came across a proof explaining why, for a finite group $G$ with a normal Hall subgroup $H$ such that $G/H$ is solvable, any two complements of $H$ are conjugate. I understand up till the definition of a group $V=E \cap HU$ such that $V\mathrel{\unlhd}E$ and $V\cong U$.

I don't understand why we define $V$ in such a way. The proof is written below.

Let $G/H$ be solvable and let $B$ and $E$ be complements for $H$. We show that $B$ and $E$ are conjugate by induction on $|G|$. Let $U$ be a minimal normal subgroup of $B$. Since $B \cong G/H$ is solvable, $U$ is a $p$-group for some prime $p$. Set $V=E \cap HU$ such that $V\mathrel{\unlhd}E$ and $V\cong U$. Clearly, $U$ and $V$ are Sylow $p$-subgroups of $HU$ and hence there exists $g \in HU$ with $V^g=U$. Thus $U\mathrel{\unlhd}B$ and $U\mathrel{\unlhd}E^g$, and both $B$ and $E^g$ are complements for $H \cap N$ in $N$, where $N=\mathfrak{N}(U)$. Since $B/U$ and $E^g/U$ are complements for $(H \cap N)U/U$ in $N/U$ and $|N/U|<|G|$, induction implies that $B/U$ and $E^g/U$ are conjugate in $N/U$. Hence, $B$ and $E^g$ are conjugate in $G$ and the result follows. $\square$
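A sketch of why this $V$ does the job, assuming Dedekind's modular law ($A(B\cap C)=AB\cap C$ when $A\le C$): since $H\le HU$ and $G=HE$,

$$HU = HU\cap G = HU\cap HE = H\,(E\cap HU)=HV,$$

so $HV=HU$; combined with $H\cap V\le H\cap E=1$ this gives $V\cong HV/H=HU/H\cong U$. Moreover $HU\unlhd G$, because $HU/H$ corresponds to $U\unlhd B$ under $B\cong G/H$, and hence $V=E\cap HU\unlhd E$. These appear to be exactly the facts needed for the Sylow argument in the next sentence.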

Maximize a single variable function under a constraint

Posted: 09 May 2022 03:16 PM PDT

Let $c\in\mathbb{R}^*, a, b\in\mathbb{R}$ with $a,b>1$ and $a>b$. Let $n\in\mathbb{N}, n\ge 2$. Consider the function $$h(x) = x^{a-b}-\frac{x^{\frac{na}{n-a}-b}}{c}.$$

As an exercise for my math class, I need to maximize this function under the constraint $$x^a\le k,$$ where $k$ denotes a suitable positive constant. My question is: what does this mean? What exactly do I have to do?

I think I need to find $x$ such that $h^{\prime}(x)=0$ and then compare with $x^a\le k,$ right?

I hope someone could help.

Thank you in advance!
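In case it helps to decode the exercise (one reading, not a definitive interpretation): the constraint simply restricts the domain, so the task is to maximize $h$ over $\{x>0 : x\le k^{1/a}\}$. The usual recipe is to solve $h'(x)=0$, discard critical points violating $x^a\le k$, and compare $h$ at the remaining critical points with the boundary value $h(k^{1/a})$; the largest of these is the constrained maximum. So the plan of finding $h'(x)=0$ and comparing with $x^a\le k$ is essentially right, with the extra step of checking the boundary.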

Unsure on how to go about solving for speed here

Posted: 09 May 2022 03:05 PM PDT

Cities A and B are 70 miles apart. A biker leaves City A at the same time that a bus leaves City B. They travel toward each other and meet 84 minutes after their departure at a point between A and B. The bus arrives at City A, stays there for 20 minutes, and then heads back to City B. The bus meets the biker again 2 hours and 41 minutes after their first meeting. What are the speeds of the bus and the biker?

So far, I've created 2 variables, Vbus = bus speed and Vbike = bike speed. Where they meet for the first time, I've marked that as distance C. From this I got 1.4 hrs = C/Vbike = (70-C)/Vbus. To describe the time at which the bus begins heading back to City B, I did (70/Vbus)+1/3 hrs, which means the bike has traveled Vbike((70/Vbus)+1/3) by the time the bus leaves again. This is all I have so far, but I'm still unsure if I'm even heading in the right direction.
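One hedged way to finish the setup (assuming the biker keeps riding toward B the whole time, which the problem seems to imply), letting sympy do the algebra; the variable names are my own:

```python
from sympy import symbols, Rational, solve

v, u = symbols('v u', positive=True)  # v = biker speed, u = bus speed (mph)

t1 = Rational(84, 60)                 # first meeting, hours after departure
t2 = t1 + 2 + Rational(41, 60)        # second meeting, hours after departure

eqs = [
    t1 * (v + u) - 70,                            # together they cover 70 mi
    u * (t2 - 70 / u - Rational(1, 3)) - v * t2,  # bus (after its 20-min stop
]                                                 # at A) catches the biker
print(solve(eqs, [v, u]))  # -> [(15, 35)]: biker 15 mph, bus 35 mph
```

The second equation just says that by time t2 the bus, which left A at time 70/u + 1/3, has covered the same distance from A as the biker.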

The Stone-Cech compactification of a countable disjoint union of closed intervals

Posted: 09 May 2022 03:04 PM PDT

Let $X$ be the disjoint union of a countably infinite number of copies of $[0,1]$. Is $\beta X$ strictly bigger than $\beta{\bf N}\times [0,1]$?

I am thinking of a bounded sequence $(f_n)$ of continuous functions on $[0,1]$ converging pointwise to the characteristic function of $(\frac12, 1]$. Then this bounded sequence defines a bounded continuous function $f$ on $X$ and I am doubtful that $f$ can be extended continuously to $\beta{\bf N}\times [0,1]$. But I haven't been able to finish the argument or visualise the "missing" ultrafilters, if any.

Note: Glicksberg's Theorem states that for Tychonov spaces $X$ and $Y$, $\beta (X\times Y)$ is $\beta X \times\beta Y$ precisely when the product $X\times Y$ of two infinite spaces is pseudocompact (where $Z$ pseudocompact means that every continuous real-valued mapping on $Z$ is bounded). This is in Russell C. Walker, The Stone-Cech Compactification, chapter 8. The space $[0,1]\times {\bf N}$ is not pseudocompact, so there are definitely other $z$-ultrafilters. But what do they look like, i.e. how should one think about $\beta ([0,1]\times {\bf N})$?

Find all the complex roots of the equation $\cos z = 3$. [closed]

Posted: 09 May 2022 02:36 PM PDT

Please explain; I tried using Euler's formula but could not make it work for the $\cos$ function.
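A sketch of the standard approach: write $w=e^{iz}$, so $\cos z=\frac{w+w^{-1}}{2}=3$ gives $w^2-6w+1=0$, i.e. $w=3\pm2\sqrt2$. Both roots are positive reals, and $3-2\sqrt2=(3+2\sqrt2)^{-1}$, so

$$z=2k\pi\pm i\ln\left(3+2\sqrt2\right),\qquad k\in\mathbb{Z}.$$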

Deterministic function that gives Snell envelope

Posted: 09 May 2022 02:52 PM PDT

I am studying for an exam and would love some hint on this review problem.

Suppose the discrete-time process $(S_t)_{0 \leq t \leq T}$ has $i.i.d.$ increments. Fix a measurable function $f:\mathbb{R} \to \mathbb{R}$, define $Z_t = f(S_t)$, and suppose $Z$ is integrable. Let $U$ be the Snell envelope of $Z$. Show that there exists a deterministic function $V$ such that $U_t = V(t, S_t)$.

This is the second part of the problem. Previously I have shown that for a Snell envelope $U$ of $Z$, $U$ is a martingale if $Z$ is a submartingale (and that $U$ is a supermartingale from definition). I would like some help on the existence of $V$.

I am thinking of dividing the increments of $S$ into different cases (positive, zero, and negative expectation) as this makes $S$ into sub, super and true martingales.
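A possible route via backward induction, assuming the usual recursion $U_T=Z_T$ and $U_t=\max\left(Z_t,\ \mathbb{E}[U_{t+1}\mid\mathcal{F}_t]\right)$: take $V(T,x)=f(x)$ and, inductively,

$$V(t,x)=\max\Bigl(f(x),\ \mathbb{E}\bigl[V(t+1,\,x+\xi)\bigr]\Bigr),$$

where $\xi$ has the common increment law. Independence of the increment from $\mathcal{F}_t$ makes $\mathbb{E}[U_{t+1}\mid\mathcal{F}_t]=\mathbb{E}[V(t+1,S_t+\xi)\mid S_t]$ a deterministic function of $S_t$, so $U_t=V(t,S_t)$ follows by induction (modulo measurability and integrability checks).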

How to represent the relative geometry of two ellipses with a common focus in GeoGebra?

Posted: 09 May 2022 02:55 PM PDT

I'm studying an astrodynamics problem and, to help my study, I'd like to represent the geometry I'm dealing with. I also obtained a figure in Matlab, but I need to represent many angles and so I'd like to avoid writing too many lines of code.

My objective is to represent two ellipses (ell1 and ell2) with a common focus in the perifocal reference frame of ell1, and this is the main problem that I have, because in the cartesian coordinates of GeoGebra I am able to write the equation of ell1 as (x/a1)^2 + (y/b1)^2 = 1, but I'm having trouble correctly writing the cartesian equation of ell2; it should be something like ((x-x0)/a2)^2 + (y/b2)^2 = 1. Of course the focus is the same for both ellipses, let's assume F(c,0). In addition, I want to represent the second ellipse rotated by an angle equal to 120 deg about the focus with respect to the first ellipse, and their intersection points.

The data that I have are the semimajor axes, the eccentricities, and the difference of the arguments of periapsides:

  • a1 = 154448562.453878 km
  • e1 = 0.0524440272062934
  • a2 = 143768968.725253 km
  • e2 = 0.027842283175748
  • delta_omega = 120 deg

The data listed above correspond to a tangency condition between the two ellipses shown in the Matlab figure. It would be amazing to be able to represent in GeoGebra ell1 fixed and ell2 rotating, while simultaneously showing the intersection points (one or two, depending on the relative geometry): hence to start with their apsidal lines overlapping and then rotate the second ellipse with respect to the first one.

Here is the Matlab figure that I would like to get: relative geometry

Here is the scheme I've obtained up to now in GeoGebra: GeoGebra

My questions are:

  • Is it possible to use a perifocal reference frame in GeoGebra or do I have to use the cartesian one?
  • Can you help me to draw the problem described above? (A possible starting point is sketched below.)
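One possibility, sketched: with the common focus at the origin, each ellipse has the focal polar form

$$r_i(\theta)=\frac{a_i\left(1-e_i^2\right)}{1+e_i\cos\left(\theta-\omega_i\right)},$$

with $\omega_1=0$ in the perifocal frame of ell1 and $\omega_2=\Delta\omega$. Rotating ell2 about the focus is then just changing $\omega_2$, and intersections solve $r_1(\theta)=r_2(\theta)$. In GeoGebra such a curve can be drawn parametrically, e.g. with Curve(r(t) cos(t), r(t) sin(t), t, 0, 2π), and $\Delta\omega$ can be attached to a slider to animate the rotation; the exact syntax may need adjusting to your GeoGebra version.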

Almost sure convergence of a sequence of Gaussians with variance sequence

Posted: 09 May 2022 02:53 PM PDT

I want to prove the following:

Consider a sequence $(Z_n)_{n \in \mathbb{N}}$ of independent random variables such that $Z_n \sim \mathcal{N}(0,\sigma_n^2)$. Now let $S_n = \sum_{j=1}^n Z_j$ and $\gamma_n^2 := \sum_{j=1}^n\sigma_j^2$, and assume that $\lim_{n \to \infty} \gamma_n^2 < \infty$. Then for any $p > 2$, $\lim_{n \to \infty} n^{-1/p}S_n=0$ a.s.

Now, my first observation was that $S_n \sim \mathcal{N}(0,\gamma_n^2)$ as a sum of independent random variables and thus $S_n' := n^{-1/p}S_n \sim \mathcal{N}(0,\frac{\gamma_n^2}{n^{2/p}})$. Hence for any $\varepsilon > 0$ by the Markov inequality we get for $q \in \mathbb{N}$ with $q \geq p$

$\mathbb{P}(\vert S_n' \vert > \varepsilon) \leq \frac{\mathbb{E}(\vert S_n'\vert^q)}{\varepsilon^q} \leq \frac{\mu_q}{\varepsilon^q} \frac{(\gamma_n^{2})^q}{n^{2q/p}} \leq \frac{\mu_q}{\varepsilon^q} \frac{(\gamma_n^{2})^q}{n^{2}} $

where $\mu_q$ is the q-th moment of the standard Gaussian, see here. Thus

$\sum_{n=1}^\infty \mathbb{P}(\vert S_n' \vert > \varepsilon) \leq \frac{\mu_q}{\varepsilon^q} \sum_{n=1}^\infty\frac{(\gamma_n^{2})^q}{n^{2}} < \infty$

since $ \gamma_n^2 < c$ for sufficiently large $n$, which implies $\lim_{n \to \infty} S_n' = 0$ a.s. by the Borel-Cantelli lemma.

First question: Does this look correct? And second: Is there a more elegant way to show that?
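Two tentative remarks on these questions. First, for $X\sim\mathcal{N}(0,\sigma^2)$ one has $\mathbb{E}\vert X\vert^q=\mu_q\sigma^q$ (the standard deviation, not the variance, is raised to the $q$-th power), so the bound should read

$$\mathbb{P}(\vert S_n'\vert>\varepsilon)\le\frac{\mu_q}{\varepsilon^q}\,\frac{\gamma_n^{q}}{n^{q/p}},$$

which is still summable once $q>p$, so the Borel-Cantelli conclusion survives with that adjustment. Second, as for elegance: since $\sum_j\sigma_j^2<\infty$, Kolmogorov's convergence theorem gives that $S_n$ converges a.s.; a convergent sequence is bounded, so $n^{-1/p}S_n\to0$ a.s. for any $p>0$.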

Propagating error through a Fast Fourier transform

Posted: 09 May 2022 02:28 PM PDT

I am trying to propagate the error associated with a Fast Fourier transform of $x_{n}$. I know the error (variance) for $x_{n}$. Then, I calculated the following quantity:

$$Y=\operatorname{Im}\left ( i\omega\, \mathrm{FFT}(x_{n})/\mathrm{length}(x_{n}) \right )$$

I calculated the variance according to:

$$\sigma_{X_{k}}^{2}=\sum \frac{\sigma_{x_{n}}^{2}}{\left(\mathrm{length}(x_{n})\right)^{2}} $$

and then I think with propagation of error it should be

$$\sigma_{Y}^{2}=|i\omega|^{2}\sigma_{X_{k}}^{2}$$

but $\sigma_{Y}$ is a few orders of magnitude greater than $Y$, which makes me skeptical that I'm doing this correctly since the error for $x_{n}$ is an order of magnitude less than $x_{n}$. Is this truly how to propagate error through the FFT and then through this calculation for $Y$, or am I misunderstanding the propagation?
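A hedged way to test the propagation is a Monte Carlo check: push pure noise through exactly the pipeline used for $Y$ and compare the empirical spread with the analytic guess. All the concrete numbers below ($N$, $\sigma$, the bin $k$, $\omega$) are placeholders, and independent noise per sample is assumed:

```python
import numpy as np

rng = np.random.default_rng(0)
N = 256
sigma_x = 0.1               # hypothetical per-sample noise std
k = 5                       # hypothetical frequency bin under study
omega = 2 * np.pi * k / N   # hypothetical angular frequency for that bin

# Monte Carlo: propagate pure noise through the same pipeline as Y
trials = 10_000
x = rng.normal(0.0, sigma_x, size=(trials, N))
Y = np.imag(1j * omega * np.fft.fft(x, axis=1)[:, k] / N)
print(Y.std())

# Analytic: Im(i*w*X_k/N) = w*Re(X_k)/N, and Var(Re X_k) = N*sigma^2/2
# for 0 < k < N/2, so std(Y) should be |w|*sigma/sqrt(2N).
print(abs(omega) * sigma_x / np.sqrt(2 * N))
```

If the two printed numbers disagree, the discrepancy points at the step of the hand calculation that needs revisiting.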

Differentiating with respect to a vector transpose

Posted: 09 May 2022 03:16 PM PDT

Suppose a normal random regression model. The least-squares criterion, expressed in matrix form, is $$Q=(Y-X\beta)^T(Y-X\beta)$$ where $Q$ is the quantity we would like to minimize to get the least-squares estimates, $Y$ is an ($n$ x 1) vector, $X$ is an ($n$ x 2) matrix, and $\beta$ is a (2 x 1) vector.

To actually obtain the least square estimates, we need to differentiate Q with respect to $\beta$ as follows:

  1. First expand the quantity Q: $$Q=Y^TY-\beta^TX^TY-Y^TX\beta+\beta^TX^TX\beta$$
  2. Utilize $Y^TX\beta = \beta^TX^TY$ and manipulate terms: $$Q=Y^TY-2\beta^TX^TY+\beta^TX^TX\beta$$
  3. Then take the derivative of Q with respect to $\beta$ and equate the result to the zero vector, $0$: $${\partial Q\over\partial\beta}=-2X^TY+2X^TX\beta=0$$

However, I can't see how the vector transpose $\beta^T$ disappeared in the above equation when the derivative of $Q$ was taken with respect to $\beta$.

I believe the steps I am missing are in elementary matrix calculus, but how do you differentiate an expression that contains both $\beta^T$ and $\beta$ in its terms (for example, $Q=Y^TY-2\beta^TX^TY+\beta^TX^TX\beta$ in our earlier example) with respect to a vector $\beta$?
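The identities doing the work here, under the convention in which $\partial Q/\partial\beta$ is a column vector: for a constant vector $a$ and a constant matrix $A$,

$$\frac{\partial}{\partial\beta}\left(\beta^Ta\right)=a,\qquad \frac{\partial}{\partial\beta}\left(\beta^TA\beta\right)=\left(A+A^T\right)\beta,$$

and since $A=X^TX$ is symmetric, the quadratic term contributes $2X^TX\beta$. The transpose "disappears" because $\beta^Ta$ is a scalar (equal to $a^T\beta$), and its gradient with respect to $\beta$ is the vector $a$ itself.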

How to find area of a polygon built on the roots of a given polynomial?

Posted: 09 May 2022 03:03 PM PDT

How to find the area of a (maximum area convex) polygon, built on the roots of a given polynomial in the complex plane?

For example, consider the equation:

$$2x^5+3x^3-x+1=0$$

It has one real and four complex roots and makes a nice convex pentagon in the complex plane (thanks, Wolfram Alpha):

[Wolfram Alpha plot: the five roots of $2x^5+3x^3-x+1$ in the complex plane, forming a convex pentagon]

Using the formula for the area of a convex polygon:

$$A=\frac{1}{2} \left( \begin{vmatrix} x_1 & x_2 \\ y_1 & y_2 \end{vmatrix} + \begin{vmatrix} x_2 & x_3 \\ y_2 & y_3 \end{vmatrix} + \dots + \begin{vmatrix} x_n & x_1 \\ y_n & y_1 \end{vmatrix} \right)$$

I obtained for this case (using numerical values of the roots):

$$A=1.460144\dots$$
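For cross-checking any coefficient-only formula, this numerical value can be reproduced in a few lines (a sketch; it of course still computes the roots, which is what the question hopes to avoid):

```python
import numpy as np
from scipy.spatial import ConvexHull

# Roots of 2x^5 + 3x^3 - x + 1, hull area via the shoelace formula.
roots = np.roots([2, 0, 3, 0, -1, 1])
pts = np.column_stack([roots.real, roots.imag])
hull = ConvexHull(pts)
print(hull.volume)  # in 2D, .volume is the enclosed area (~1.460144)
```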


Another simple case - roots of unity. They just make regular polygons and the general formula for the area is well known.


However, I would like to know if it's possible to find out this area without computing the roots, using only the coefficients of the polynomial? (The coefficients are meant to be rational).

I know that polynomials with only real roots will all have $A=0$, and for the polynomials with several real roots some of them will be inside our maximum area polygon.

There is a useful theorem (see Rouché's theorem), according to which:

For a monic polynomial $$z^n+a_{n-1} z^{n-1}+\dots+a_1 z+a_0$$

All its roots will be located inside the circle $|z|=1+\max |a_k|$.

But this theorem gives a relatively large region and can't be used to approximate the area of the polygon.

How to evaluate $\int_{0}^{\infty }\frac{\ln( 1+x^{4} )}{1+x^{2}}{d}x~,~\int_{0}^{1}\frac{\arctan x}{x}\frac{1-x^{4}}{1+x^{4}}{d}x$

Posted: 09 May 2022 02:45 PM PDT

How to evaluate these two integrals below $$\int_{0}^{\infty }\frac{\ln\left ( 1+x^{4} \right )}{1+x^{2}}\mathrm{d}x$$ $$\int_{0}^{1}\frac{\arctan x}{x}\frac{1-x^{4}}{1+x^{4}}\mathrm{d}x$$ For the first one, I tried to use $$\mathcal{I}'(s)=\int_{0}^{\infty }\frac{x^4}{(1+sx^4)(1+x^{2})}\mathrm{d}x$$ but it seems hard to solve.
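A possible route for the first integral, sketched: parametrize instead $J(a)=\int_0^\infty\frac{\ln(x^2+a^2)}{1+x^2}\,dx$, for which $J'(a)=\frac{\pi}{a+1}$ and $J(1)=\pi\ln2$ give $J(a)=\pi\ln(1+a)$ for $a>0$. Factoring $1+x^4=(x^2+i)(x^2-i)$ and applying $J$ with $a=e^{\pm i\pi/4}$ (justifiable by analytic continuation of $J$ in $a$) suggests

$$\int_{0}^{\infty }\frac{\ln\left ( 1+x^{4} \right )}{1+x^{2}}\mathrm{d}x=\pi\ln\left(\left\vert 1+e^{i\pi/4}\right\vert^2\right)=\pi\ln\left(2+\sqrt2\right),$$

which a quick numerical check should confirm or refute.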

Show that $f(F(x))F'(x)$ is measurable.

Posted: 09 May 2022 03:03 PM PDT

This is an exercise from Stein and Shakarchi's Real Analysis.

Let $F$ be absolutely continuous and increasing on $[a,b]$ with $F(a)=A$ and $F(b)=B$. Suppose $f$ is any measurable function on $[A,B]$.

Show that $f(F(x))F'(x)$ is measurable on $[a,b]$.

I am really having a hard time starting this problem. I know that $F'(x)$ is definitely measurable, but $f(F(x))$ need not be.

Geometry Proof: Convex Quadrilateral

Posted: 09 May 2022 02:33 PM PDT

A quadrilateral $ABCD$ is formed from four distinct points (called the vertices), no three of which are collinear, and from the segments $AB$, $CB$, $CD$, and $DA$ (called the sides), which have no intersections except at those endpoints labeled by the same letter. The notation for this quadrilateral is not unique- e.g., quadrilateral $ABCD$ = quadrilateral $CBAD$.

Two vertices that are endpoints of a side are called adjacent; otherwise the two vertices are called opposite. The remaining pair of segments $AC$ and $BD$ formed from the four points are called diagonals of the quadrilateral; they may or may not intersect at some fifth point. If $X$, $Y$, $Z$ are the vertices of quadrilateral $ABCD$ such that $Y$ is adjacent to both $X$ and $Z$, then angle $XYZ$ is called an angle of the quadrilateral; if $W$ is the fourth vertex, then angle $XWZ$ and angle $XYZ$ are called opposite angles.

The quadrilaterals of main interest are the convex ones. By definition, they are the quadrilaterals such that each pair of opposite sides, e.g., $AB$ and $CD$, has the property that $CD$ is contained in one of the half-planes bounded by the line through $A$ and $B$, and $AB$ is contained in one of the half-planes bounded by the line through $C$ and $D$.

a) Using Pasch's theorem, prove that if one pair of opposite sides has this property, then so does the other pair of opposite sides.

b) Prove, using the crossbar theorem, that the following are equivalent:

  1. The quadrilateral is convex.
  2. Each vertex of the quadrilateral lies in the interior of the opposite angle.
  3. The diagonals of the quadrilateral meet.

I can't seem to make sense of a) at all. For b) I approached it in this way - I made three separate proofs:

  1. If the quadrilateral is convex, then the diagonals of the quadrilateral meet.

Proof: Assume quadrilateral $ABCD$ is a convex quadrilateral. We have to prove that segment $AC$ and segment $BD$ have a point in common. By the definition of a convex quadrilateral, $C$ is in the interior of angle $DAB$. Hence, ray $AC$ intersects segment $BD$ at some point $E$ (by the crossbar theorem). Therefore, $E$ is the required intersection point of the diagonals $AC$ and $BD$.

  2. If the diagonals of the quadrilateral meet, then each vertex of the quadrilateral lies in the interior of the opposite angle.

Proof:

  3. If each vertex of the quadrilateral lies in the interior of the opposite angle, then the quadrilateral is convex.

Proof:

I'm also confused about the proofs for 2 and 3. Theorems and axioms that might be helpful:

Pasch's Theorem: If $A$, $B$, and $C$ are distinct points and $l$ is any line intersecting $AB$ in a point between $A$ and $B$, then $l$ also intersects either $AC$, or $BC$. If $C$ does not lie on $l$, then $l$ does not intersect both $AC$ and $BC$.

Interior of an angle: Given an angle $\angle CAB$, define a point $D$ to be in the interior of $\angle CAB$ if $D$ is on the same side of ray $AC$ as $B$ and if $D$ is also on the same side of ray $AB$ as $C$. Thus, the interior of an angle is the intersection of two half-planes.

Crossbar Theorem: If ray $AD$ is between ray $AC$ and ray $AB$, then ray $AD$ intersects segment $BC$.

Any help at all would be much appreciated.
