Friday, March 19, 2021

Recent Questions - Mathematics Stack Exchange

Any embedded smooth submanifold of $\mathbf R^n$ has smooth coordinate patches

Posted: 19 Mar 2021 08:53 PM PDT

I believe that more or less this type of question has already appeared, but when typing this question I could not find it. Nevertheless, this question concerns two types of definitions of (smooth) manifolds: one for submanifolds of $\mathbf R^n$ and the other for abstract manifolds. Obviously, these definitions are generally not equivalent, but I want to understand them in the situation of embedded smooth submanifolds of $\mathbf R^n$.

Let me recall these definitions for clarity.

Definition 1 (Munkres, etc). A subspace $M \subset \mathbf R^n$ is a $k$-manifold of class $C^r$ if for any point $p \in M$, there are an open set $V \subset M$ containing $p$, an open set $U \subset \mathbf R^k$, and a continuous map $\alpha : U \to V$ such that

  1. $\alpha \in C^r$ (in the extended sense)
  2. $\alpha^{-1}: V \to U$ is continuous
  3. $D\alpha$ has rank $k$ at any point in $U$.

Note that the continuity of $\alpha^{-1}$ in condition 2 is enough because $\alpha^{-1}$ is in fact of class $C^r$; see Thm 24.1 in Munkres's book. So, basically, Definition 1 says that for any point $p \in M$, there is a diffeomorphism between an open set containing $p$ and an open subset of $\mathbf R^k$.

Definition 2 (Lee, etc). A smooth manifold $M$ of dimension $k$ is a topological space equipped with a smooth structure, which basically says that there is a family of local charts $(U, \phi)$ of $M$, where $U$ is open in $M$ and $\phi$ is a homeomorphism from $U$ onto an open subset of $\mathbf R^k$, such that the transition maps between two overlapping charts are smooth. (I do not want to write this down in detail because it is more or less standard and appears in many textbooks.)

Let us now consider an embedded smooth submanifold $M \subset \mathbf R^n$ of dimension $k$. (By definition, this means that the inclusion $i : M \to \mathbf R^n$ is a smooth embedding, but here $M$ is a subset of $\mathbf R^n$ for simplicity.) My question is how to verify the three conditions in Definition 1. I hardly see how to construct $\alpha$ with enough regularity around a given point, since $\phi$ and $\phi^{-1}$ in Definition 2 are only continuous.
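
One hedged sketch of how the two definitions line up, assuming Lee's definition of a smooth embedding: the smoothness that seems missing from $\phi$ is carried by the inclusion $i$. Given a chart $(U, \phi)$ around $p$, a natural candidate is

$$\alpha := i \circ \phi^{-1} : \phi(U) \subset \mathbf R^k \longrightarrow U \subset M \subset \mathbf R^n.$$

Smoothness of $i$ says exactly that this coordinate representation is $C^\infty$ as a map into $\mathbf R^n$ (condition 1), $\alpha^{-1} = \phi$ is continuous because $\phi$ is a homeomorphism (condition 2), and $D\alpha$ has rank $k$ because an embedding is in particular an immersion (condition 3).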

Geometric Expectation

Posted: 19 Mar 2021 08:53 PM PDT

Two coins with heads probabilities $\frac{1}{3}$ and $\frac{1}{4}$ are alternately tossed, starting with the $\frac{1}{3}$ coin, until one of them turns up heads. Let $X$ denote the total number of tosses, including the last. Find:

$$E(X)$$

I have seen a recent post that describes the answer as:

$$\mathbb{E}(X) = \displaystyle 1 + \frac{2}{3} (1 + \frac{3}{4} \mathbb{E}(X))$$

This still confuses me: how can you solve for $E(X)$ when $E(X)$ appears on both sides of the equation?
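
Treating $E(X)$ as an ordinary unknown, the equation is linear in $E(X)$ and rearranges to $E(X) = 10/3$. A minimal Monte Carlo sketch to corroborate that value:

```python
import random

def tosses():
    """Simulate one round: alternate the 1/3 and 1/4 coins until a head appears."""
    n = 0
    while True:
        n += 1
        if random.random() < 1/3:   # the 1/3 coin is tossed first
            return n
        n += 1
        if random.random() < 1/4:
            return n

trials = 10**6
print(sum(tosses() for _ in range(trials)) / trials)   # ≈ 3.333 = 10/3
```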

Find the probability density function of a dart.

Posted: 19 Mar 2021 08:49 PM PDT

Someone plays darts, and the landing point has a joint density function based on the surface $z = 3 - r$, where $z$ is the likelihood of the dart landing at $(r,\theta)$. How can I find the value of $C$ so that $g$ is a joint density function, and then find the probability that the thrown dart lands in $S$?

$$g(r,\theta)= \begin{cases} C(3-r), & \text{if } (r,\theta)\in R\\ 0, & \text{otherwise} \end{cases}$$

$R$ is the region inside $r = 1 + \tan(\theta/4)$ (which includes the little loop) and $S$ is just the region inside the inner loop.

So far this is how I am solving for $C$, but how can I actually get the probability? Also, is this correct so far?

[Image: the poster's partial work solving for $C$]
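
For reference, a sketch of the setup in polar coordinates, assuming $g$ is the joint density (note the Jacobian factor $r$):

$$1 = \iint_R C(3-r)\, r\, dr\, d\theta \quad\Longrightarrow\quad C = \left(\iint_R (3-r)\, r\, dr\, d\theta\right)^{-1}, \qquad P\big((r,\theta)\in S\big) = C \iint_S (3-r)\, r\, dr\, d\theta.$$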

What is a suitable choice of Lyapunov function for this system?

Posted: 19 Mar 2021 08:47 PM PDT

I have the following nonlinear system:

\begin{align}\dot x&=x(2-y^4) \\ \dot y &= -3x(1+y^3) \end{align}

I first analyzed this system using linearization; however, the eigenvalues are $\lambda =\pm i \sqrt{2}$, which is inconclusive for the stability of $(0,0)$. I then tried to find a Lyapunov function to study its stability. I started with the classical candidate $V(x,y) = a x^2 + b y^2$, adjusting the coefficients $a,b$. However, it has taken me hours and still no result. Any hints would be appreciated!! Thanks in advance.
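
The coefficient search can at least be automated. A minimal sympy sketch of the procedure described above (symbolically computing $\dot V$ for the candidate $V = ax^2 + by^2$, to be inspected for sign conditions):

```python
import sympy as sp

x, y, a, b = sp.symbols('x y a b', real=True)

# The system as given in the question.
xdot = x * (2 - y**4)
ydot = -3 * x * (1 + y**3)

# Candidate Lyapunov function and its derivative along trajectories.
V = a * x**2 + b * y**2
Vdot = sp.expand(sp.diff(V, x) * xdot + sp.diff(V, y) * ydot)
print(Vdot)   # inspect which a, b (if any) make this negative (semi)definite
```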

Conditional expectation proof

Posted: 19 Mar 2021 08:46 PM PDT

In the formal definition of conditional expectation I have found this identity:
For any $A\in F$ we have $\int_A E(X|F)\, dP=\int_A X\, dP$, that is, $X=E(X|F)$, where $X$ is a random variable,
but not the proof for it. When I try to do it by myself I get that the conditional expectation equals $E(X)$. What am I doing wrong?

Calculating the value function

Posted: 19 Mar 2021 08:44 PM PDT

The utility function of a consumer is $u = x_1^{1/3} x_2^{2/3}$ and he seeks to minimize his expenditure. Here $x_1$ and $x_2$ are the goods and $p_1$ and $p_2$ are their respective prices. I set up the minimization problem in the following manner:

Minimize $p_1 x_1 + p_2 x_2$ subject to $u = x_1^{1/3} x_2^{2/3}$.

Then I used the Lagrangian to find the values of $x_1$, $x_2$ and the multiplier $\lambda$. They turned out to be $x_1 = u (p_2/2p_1)^{1/3}$, $x_2 = u (2p_1/p_2)^{1/3}$, with the Lagrange multiplier being $3u (p_1 x^{2/3})(p_2/2p_1)^{1/3}$. I am not sure if I calculated the Lagrange multiplier correctly. However, my main query is that I want to find the value function $V$ and then evaluate $\partial V/\partial p_i$ for $i=1,2$. I understand in brief what the value function means, but I am having difficulty understanding how to work out this portion from the things I have calculated.
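
A sympy sketch of the computation, using the tangency condition $\mathrm{MRS} = p_1/p_2$ and Shephard's lemma ($\partial V/\partial p_i = x_i^*$); the exponents fall out of the algebra and can be checked against the hand derivation:

```python
import sympy as sp

p1, p2, u, x1 = sp.symbols('p1 p2 u x1', positive=True)

# Tangency for u = x1^(1/3) * x2^(2/3):  x2 / (2*x1) = p1 / p2.
x2_of_x1 = 2 * p1 * x1 / p2
# The utility constraint pins down x1*:
x1_star = sp.solve(sp.Eq(u, x1**sp.Rational(1, 3) * x2_of_x1**sp.Rational(2, 3)), x1)[0]
x2_star = x2_of_x1.subs(x1, x1_star)

V = sp.simplify(p1 * x1_star + p2 * x2_star)      # value (expenditure) function
print(V)
print(sp.simplify(sp.diff(V, p1) - x1_star))      # 0: Shephard's lemma, dV/dp1 = x1*
print(sp.simplify(sp.diff(V, p2) - x2_star))      # 0: dV/dp2 = x2*
```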

Find a function $f$ which is Riemann-integrable on $[0,T]$, and so that $\int_0^T f(t)^2 dt$ is infinite.

Posted: 19 Mar 2021 08:50 PM PDT

I am reading "Linear Algebra, Signal Processing, and Wavelets - A Unified Approach: Python Version" by Øyvind Ryan.

There is the following exercise in this book.

Exercise 1.18: Riemann-Integrable Functions Which Are Not Square Integrable
Find a function $f$ which is Riemann-integrable on $[0,T]$, and so that $\int_0^T f(t)^2 dt$ is infinite.

$\int_0^1 \frac{dt}{\sqrt{t}} = 2$, but $\int_0^1 \frac{dt}{t} = \infty$.
But do we say $f(x) = \frac{1}{\sqrt{x}}$ is Riemann integrable on $[0,1]$?
I think if $f$ is Riemann integrable on $[0,T]$, then $f$ must at least satisfy the condition that $f$ is bounded on $[0,T]$.
Is there really an $f$ such that $f$ is bounded on $[0,T]$, $\int_0^T f(t)\, dt$ exists, and $\int_0^T f(t)^2\, dt$ is infinite?
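
A quick numerical illustration of the two improper integrals mentioned above (a sketch only, not a proof):

```python
import numpy as np
from scipy.integrate import quad

for eps in (1e-2, 1e-4, 1e-6):
    I1, _ = quad(lambda t: 1 / np.sqrt(t), eps, 1)   # tends to 2 as eps -> 0
    I2, _ = quad(lambda t: 1 / t, eps, 1)            # grows like log(1/eps): divergent
    print(eps, I1, I2)
```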

Morphisms of schemes are not determined by their underlying map of points.

Posted: 19 Mar 2021 08:41 PM PDT

In Vakil's notes, Exercise 6.3.L states that morphisms of schemes $X\rightarrow Y$ are determined by their induced maps, and not determined by their underlying map of points.

By Yoneda's lemma, $X\rightarrow Y$ is determined by $h_X\rightarrow h_Y$. What is an example of morphisms that are not determined by their underlying map of points?

Cyclic property of the root of a cubic polynomial

Posted: 19 Mar 2021 08:34 PM PDT

Let $f(x) = x^3 - 3x + 1$. Let $\alpha, \beta, \gamma \in \mathbb{R}$ be the roots of $f$ with $\alpha > \beta > \gamma$. Let $g(x) = x^2 - 2$. We see that $\beta = g(\alpha)$, $\gamma = g(\beta)$ and $\alpha = g(\gamma)$ by calculation using $f$.

Why does this cyclic property hold? What is the relation between $f$ and $g$? Could you tell me its background or generalization?
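
A numerical check of the cyclic behaviour; the comment records the standard trigonometric background (the roots are $2\cos(2\pi/9)$, $2\cos(8\pi/9)$, $2\cos(14\pi/9)$, and $g$ doubles the angle):

```python
import numpy as np

# g(2*cos t) = 4*cos(t)**2 - 2 = 2*cos(2t), so g acts on the roots by doubling the angle.
alpha, beta, gamma = sorted(np.roots([1, 0, -3, 1]).real, reverse=True)
g = lambda x: x**2 - 2
print(np.isclose(g(alpha), beta), np.isclose(g(beta), gamma), np.isclose(g(gamma), alpha))
# -> True True True
```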

Pointwise and uniform convergence of $f_n(x)=\sqrt[n]{nx^2+1}$

Posted: 19 Mar 2021 08:37 PM PDT

Study the pointwise and uniform convergence of $f_n(x)=\sqrt[n]{nx^2+1}$ for $x\in\mathbb{R}$.

My attempt: let $x_0 \in \mathbb{R}$ fixed, it is $$\lim_{n\to\infty} \sqrt[n]{nx^2_0+1}=1$$ So $f_n$ converges pointwise to $f(x)=1$ for all $x\in\mathbb{R}$.

For the study of the uniform convergence, I have to evaluate $$\lim_{n \to \infty} \sup_{x \in \mathbb{R}}|\sqrt[n]{nx^2+1}-1|$$ The $n$-th root is an increasing function, so since $nx^2+1 \geq 1$ for all $x\in\mathbb{R}$ it is $\sqrt[n]{nx^2+1}-1 \geq 0$ and so $|\sqrt[n]{nx^2+1}-1|=(\sqrt[n]{nx^2+1}-1)$; moreover, it is $$\frac{\text{d}}{\text{d}x}(\sqrt[n]{nx^2+1}-1)=2x(nx^2+1)^{\frac{1-n}{n}} \geq 0 \iff x \geq 0$$ So the function $f_n-f$ is an increasing function of $x$ for $x \geq 0$ and it is an unbounded function of $x$, since $$\lim_{x \to \infty} (f_n(x)-f(x)) = \lim_{x \to -\infty} (f_n(x)-f(x)) =\infty$$ So it is $$\sup_{x \in \mathbb{R}}(\sqrt[n]{nx^2+1}-1)=\infty$$ So $\lim_{n \to \infty} \sup_{x \in \mathbb{R}} |\sqrt[n]{nx^2+1}-1|=\infty$, which means that $f_n$ doesn't converge uniformly to $f(x)=1$ on $\mathbb{R}$. However, given $a>0$ and considering the interval $[-a,a]$, by the same monotonicity argument it is $$\sup_{x \in [-a,a]} |\sqrt[n]{nx^2+1}-1|=\sqrt[n]{na^2+1}-1 \to 0 \ \text{as} \ n \to \infty$$ So $f_n$ converges uniformly to $f(x)=1$ on every compact subset $[-a,a]$ of $\mathbb{R}$.
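
A quick numerical check of both conclusions (a sketch):

```python
import numpy as np

def f_n(x, n):
    return (n * x**2 + 1) ** (1.0 / n)

# sup |f_n - 1| on the compact interval [-2, 2] shrinks to 0 ...
xs = np.linspace(-2.0, 2.0, 20001)
for n in (10, 100, 1000):
    print(n, np.max(np.abs(f_n(xs, n) - 1)))

# ... yet for each fixed n, f_n(x) - 1 grows without bound in x (here n = 10):
for x in (1e3, 1e6, 1e9):
    print(x, f_n(x, 10) - 1)
```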

My questions are the following:

  1. When I study the positivity of the derivative, I omit the term $(nx^2+1)^{\frac{1-n}{n}}$ because I see it as an exponential and so I conclude that it is always positive. In general I have problems with $n$-th powers and $n$-th roots; is this reasoning correct?

  2. When I study the uniform convergence on $\mathbb{R}$, I have to take the limit of a supremum which is already $\infty$; this never happened in my previous study, so I'm unsure about this step. Does it make sense to take the limit of something that is already $\infty$?

  3. Is the reason that the convergence is uniform on every compact subset of $\mathbb{R}$ but cannot be extended to $\mathbb{R}$ the following: when we introduce $a>0$ to define the interval $[-a,a]$ we fix $a$, and so we can conclude with the estimate on the supremum, while the elements of $\mathbb{R}$ vary without a fixed bound (unlike $a$), and this makes the difference for uniform convergence? Or is there a deeper reason why the uniform convergence fails on $\mathbb{R}$ but holds on every compact subset of $\mathbb{R}$?

Thank you.

Does $f$ always have a finite value at a Lebesgue point?

Posted: 19 Mar 2021 08:43 PM PDT

For a locally integrable function $f$ in $\Bbb{R}^d$, a point $\bar{x}\in \Bbb{R}^d$ lies in the Lebesgue set of $f$ if:

$$\lim_{\substack{\bar{x} \in B \\ m(B) \to 0}} \frac{1}{m(B)} \int_{B}|f(y)-f(\bar{x})|\, dy=0$$

The question is: does a Lebesgue point always have a finite value ($f(\bar{x})<\infty$)?

In Stein's real analysis book this point is specified, but in Rudin's and Folland's books it is not. I guess it is just an implicit assumption here, since otherwise $|f(y)-f(\bar{x})|$ may not be well defined. Correct?

Find tight bound of the recurrence equation

Posted: 19 Mar 2021 08:24 PM PDT

[Image: the recurrence equation]

I'm not sure whether I'm correct... I used the Master Theorem for the cases where $n>1$ to get $\Theta(n\log^2 n)$. Should I just ignore the case for $n = 1$, since $1$ is smaller than $n\log^2 n$?

If possible, are there any other methods to find the tight bound of this piecewise function (besides the Master Theorem)?
Thank you.

$0$ element of $A\otimes \mathbb{Q}$

Posted: 19 Mar 2021 08:47 PM PDT

I am studying tensor products and I feel that it is not easy to decide whether $a\otimes b=0$ or $a\otimes b\neq 0$.

I'm thinking about the following special case, which should be true.

Let $A$ be an abelian group, in other words a $\mathbb{Z}$-module, and let $a\otimes 1 \in A\otimes_{\mathbb{Z}} \mathbb{Q}$. Then $a\otimes 1 =0$ if and only if there exists some nonzero $n\in \mathbb{Z}$ such that $na=0$.

I think "if part" is trivial, but the other part is difficult. I want a proof of this claim or a counter examaple.

Halmos's proof of Schröder-Bernstein theorem

Posted: 19 Mar 2021 08:29 PM PDT

In the first paragraph of Halmos's proof of the Schröder-Bernstein theorem, he states:

It is convenient to assume that the sets $X$ and $Y$ have no elements in common; if that's not true, we can so easily make it true that the added assumption involves no loss of generality.

I do not understand this point. How can we make it true?

As another question on this same theorem, a number of proofs I have seen talk about successively applying $f$ and $g$, injections $A \to B$ and $B \to A$, respectively, to get functions of the form $(f \circ g)^n$. However, the Schröder-Bernstein theorem applies to any sets, countable or not, so how is this not sacrificing generality and assuming that the infinities in question are countable? For example, this proof https://artofproblemsolving.com/wiki/index.php/Schroeder-Bernstein_Theorem from the Art of Problem Solving website defines an element $b_1 \in B$ as a descendant of $b_0 \in B$ if $b_1 = (f \circ g)^n (b_0)$, but I may have needed to apply $f$ and $g$ uncountably many times (can I even do that?) if there were uncountably many "lonely" points ($b \in B$ such that there is no $a \in A$ with $f(a) = b$).

Differences between ideals and principal ideals in mathematical notation

Posted: 19 Mar 2021 08:33 PM PDT

In the following post (Confusion between principal ideal and ideal) on the distinction between the concepts of ideals and principal ideals, @Yury stated: "... if $I$ is a principal ideal then every element of $I$ is a multiple of $a$ (for some fixed $a\in I$). (2) Every ideal contains a principal ideal but not the other way around." I am trying to write out precisely, in mathematical notation, what it means that not every ideal is a principal ideal.

What I would like is to see if I can phrase what Yury stated in mathematical notation. I know this may sound a bit pedantic; I just would like to make sure I cross all the Ts and dot all the Is when it comes to formulating it with the correct quantifiers.

Definition of an Ideal:

An $\textbf{ideal}$ of a ring $R$ is a subring $I$ of $R$ such that for all $x\in R$ and $y\in I$, both $xy \in I$ and $yx \in I$

Definition of Principal Ideal:

Let $R$ be a commutative ring with a unit element. An ideal $I$ of $R$ is $\textbf{principal}$ if there exists $d\in R$ such that $I=(d)= \{rd\mid r\in R\}.$ In this case, $d$ is said to $\textbf{generate}$ $I$.

Yury's statement part (1) would mean: given a commutative ring $R$ with unit element, for any ideal $I$ of $R$ there exists an $a\in R$ such that $(a) \subset I$, where $(a)=\{ar \mid r \in R\}$ is trivially a principal ideal. Part (2) of his statement would translate to: there exists an ideal $I$ of $R$ such that $I \not\subset (a)$ for any $a \in R$, meaning there exist an ideal $I$ of $R$ and an $x \in I$ with $x \neq ar$ for all $a, r \in R$.
I am not certain whether the way I have put Yury's statements into mathematical notation is accurate. If someone can comment and point out any errors, it will be much appreciated. Thank you in advance.

applying congruences

Posted: 19 Mar 2021 08:49 PM PDT

I have a couple of problems I am struggling with.

  1. Use congruences to show that $3x^2-4y=5$ has no integer solutions.
  2. Let p be a prime number. Find all solutions for $x^2\equiv x \pmod{p}$
  3. Use congruences to show that for odd integers $n$ not divisible by $3$, $n^2\equiv 1\pmod{24}$

My attempt:

  1. With $3x^2=4y+5$, $3x^2$ acts as our dividend, $4$ as our divisor, and $5$ as our remainder. Then $5\equiv 3x^2 \pmod 4$. By the division algorithm the remainder must be between $0$, inclusive, and the divisor, exclusive. Since $5 \not < 4$, there cannot be any integer solutions for $x,y$.
  2. Since $2$ is prime, by Fermat's Theorem $x^2\equiv x \pmod 2$. Then $x^2-x\equiv 0 \pmod{2}$, which means $2\vert x^2-x$. I checked this by cases for even and odd integers $x$, and found that $x^2-x$ is always divisible by $2$. Then $x^2-x \equiv 0 \pmod 2$ must be true, thus $x^2\equiv x \pmod p$ is satisfied for any $x\in \mathbb{Z}$ with $p=2$.
  3. I have no idea how to start here. I tried a contrapositive proof, but wasn't able to get anywhere. (A computational sanity check of statements 1 and 3 follows below.)
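
For what it's worth, statements 1 and 3 can be sanity-checked computationally (a sketch; this does not replace the congruence arguments):

```python
# 1. 3x^2 mod 4 only takes the values {0, 3}, while 4y + 5 ≡ 1 (mod 4):
print(sorted({(3 * x * x) % 4 for x in range(4)}))   # [0, 3] -- the residue 1 is never attained
# 3. n^2 ≡ 1 (mod 24) for odd n not divisible by 3, checked well past a full period:
print(all(n * n % 24 == 1 for n in range(1, 1000) if n % 2 == 1 and n % 3 != 0))   # True
```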

A triple integral over the unit ball

Posted: 19 Mar 2021 08:39 PM PDT

I'm trying to find the following triple integral over the unit ball.

$$\iiint_{x^2 + y^2 + z^2 \leq 1} e^{(1-x^{2}-y^{2})^{3/2}} {\rm d} x \, {\rm d} y \, {\rm d} z$$

I am able to find a suitable parametrization but I can't get the bits in the exponent in terms of $r$. Any hints would be appreciated.
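
One possible reduction (a sketch, not necessarily the intended route): the integrand does not involve $z$, so integrate $z$ out first (its extent at radius $r$ is $2\sqrt{1-r^2}$) and then use polar coordinates; the substitution $u = (1-r^2)^{3/2}$ gives $\frac{4\pi}{3}(e-1)$. A numerical check:

```python
import numpy as np
from scipy.integrate import dblquad

val, _ = dblquad(lambda r, th: 2 * np.sqrt(1 - r**2) * np.exp((1 - r**2)**1.5) * r,
                 0, 2 * np.pi,   # theta
                 0, 1)           # r
print(val, 4 * np.pi / 3 * (np.e - 1))   # both ≈ 7.1975
```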

How do we solve the integral $\int \frac{1}{x^2-y}\mathrm{d}x$?

Posted: 19 Mar 2021 08:53 PM PDT

The integral to be solved is given by:

$$ I = \int \frac{1}{x^2-y}\mathrm{d}x$$

I was wondering what integral substitution I would need to make. I looked at Symbolab and it directed me to use $x = u\sqrt{y}$ as a substitution, but where does it come from?
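
A sketch of where the substitution comes from, treating $y$ as a positive constant: $x = u\sqrt{y}$ rescales the quadratic so that $x^2 - y = y(u^2-1)$, which reduces the integral to a standard form:

$$I \;\overset{x=u\sqrt{y}}{=}\; \int \frac{\sqrt{y}\, du}{y(u^2-1)} = \frac{1}{\sqrt{y}} \int \frac{du}{u^2-1} = \frac{1}{2\sqrt{y}} \ln\left|\frac{u-1}{u+1}\right| + C = \frac{1}{2\sqrt{y}} \ln\left|\frac{x-\sqrt{y}}{x+\sqrt{y}}\right| + C.$$

(For $y<0$ the same rescaling with $\sqrt{-y}$ leads to an arctangent instead.)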

Evaluate $\int_{0}^{\pi/2}\cos^{2n}(x)\text{d}x$.

Posted: 19 Mar 2021 08:41 PM PDT

I tried this. Notice that, $$ \begin{split} \cos^{2n}x &= \left(\frac{e^{ix}+e^{-ix}}{2}\right)^{2n} = \frac{1}{2^{2n}} \sum_{k=0}^{2n} \binom{2n}{k}e^{ikx}e^{-i(2n-k)x} \\ &= \frac{1}{2^{2n}} \sum_{k=0}^{2n} \binom{2n}{k}e^{i(2k-2n)x} \end{split} $$ The terms with $k\ne n$ integrate to zero over $[0,\pi/2]$ (pairing $k$ with $2n-k$ combines them into $\cos(2(k-n)x)$, which integrates to zero there), and we are left with $$ \int_0^{\pi/2}\cos^{2n}x \,dx = \int_0^{\pi/2}\frac{1}{2^{2n}}\binom{2n}{n} \,dx = \frac{\pi}{2^{2n+1}}\binom{2n}{n} $$ Am I right or not?
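
A numerical spot-check of the closed form against direct quadrature (a sketch):

```python
from math import comb, pi
import numpy as np
from scipy.integrate import quad

for n in range(1, 6):
    numeric, _ = quad(lambda x: np.cos(x) ** (2 * n), 0, pi / 2)
    print(n, numeric, pi * comb(2 * n, n) / 2 ** (2 * n + 1))   # the two columns agree
```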

How to calculate the trace norm via convex optimization?

Posted: 19 Mar 2021 08:49 PM PDT

I am taking the convex optimization course by CMU (though I am not a CMU student) and got stuck on this problem.


Formally, show that computing $\left \| X \right \|_{tr}$ can be expressed as the following convex optimization problem: $$\begin{array}{ll} \underset{{Y \in \mathbb{R}^{m \times n}}}{\text{maximize}} & \mbox{tr} \left( X^T Y \right)\\ \text{subject to} & \begin{pmatrix} I_{m} & Y \\ Y^{T} & I_{n} \end{pmatrix} \succeq 0\end{array}$$ where $I_{p}$ is the $p \times p$ identity matrix.

Any help would be appreciated.
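
Not a proof, but the claimed identity is easy to test numerically with cvxpy (a sketch, assuming the standard bmat/PSD-constraint interface):

```python
import cvxpy as cp
import numpy as np

rng = np.random.default_rng(0)
m, n = 4, 3
X = rng.standard_normal((m, n))

Y = cp.Variable((m, n))
M = cp.bmat([[np.eye(m), Y], [Y.T, np.eye(n)]])            # the block matrix in the constraint
prob = cp.Problem(cp.Maximize(cp.trace(X.T @ Y)), [M >> 0])
prob.solve()
print(prob.value, np.linalg.norm(X, 'nuc'))                # optimal value vs. the trace norm of X
```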

How to compute the time of upward movement of a Brownian motion

Posted: 19 Mar 2021 08:51 PM PDT

Suppose $B(t)$ is the standard Brownian motion. Does it make sense to compute the total time of upward movement $$\int_0^t I(dB_\tau>0)d\tau$$ where $I$ is the characteristic function? What is a rigorous way to define it if it does make sense? Intuition dictates this should be $\frac t2$. I am aware of Doob's upcrossing lemma. Perhaps we can use local time and Tanaka's formula. But I do not have a way to define my integral in similar terms.

Formulas for percentage gain/loss equivalent to formula for percentage change?

Posted: 19 Mar 2021 08:49 PM PDT

Can you please tell me whether the percentage change formula below can be used in place of both the percentage gain formula and the percentage loss formula below? Thank you in advance.

I have seen that percentage gain can be calculated using this formula:

$$\left(\frac{\text{2nd value}}{\text{original value}} - 1\right) \cdot 100$$

e.g. original value $= 10$, 2nd value $= 12$: $(12/10 - 1) \cdot 100 = 20\%$ percentage gain

and percentage loss as follows:

$$\left(1 - \frac{\text{2nd value}}{\text{original value}}\right) \cdot 100$$

e.g. original value $= 12$, 2nd value $= 10$: $(1 - 10/12) \cdot 100 \approx 16.7\%$ loss

Do their results differ from the result the following percentage change formula gives? Or can it be used to replace both formulas?: $$\frac{\text{new value} - \text{old value}}{\text{old value}} \cdot 100$$
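
With the original (old) value in the denominator, a single signed formula reproduces both cases; a minimal sketch:

```python
def pct_change(old, new):
    """Signed percentage change relative to the original (old) value."""
    return (new - old) / old * 100

print(pct_change(10, 12))   #  20.0   -> a 20% gain
print(pct_change(12, 10))   # -16.67  -> a 16.7% loss (the sign marks gain vs. loss)
```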

How many sequences of length 10 with elements $\{a, b, c, d\}$ use exactly $3$ of the $4$ elements?

Posted: 19 Mar 2021 08:42 PM PDT

My logic: since $3$ out of $4$ elements are chosen, each chosen element would appear once. So a sequence would look like: $a\,b\,c\,x\,x\,x\,x\,x\,x\,x$

We have $7$ spots $x$ that can be filled with any of the chosen elements, so: $3^7$

Choosing $3$ out of $4$ elements: $\binom{4}{3} = 4$. So in total, there are $4\cdot 3^7$ sequences.

Can someone tell me if this is right or not? Thanks
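
The numbers here are small enough that a brute-force check is feasible (a sketch; it tests the count rather than proving it):

```python
from itertools import product

count = sum(1 for s in product('abcd', repeat=10) if len(set(s)) == 3)
print(count)        # exact count of length-10 sequences using exactly 3 of the 4 letters
print(4 * 3**7)     # the count proposed above, for comparison
```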

This parametrised algebraic inequality in 3 variables is: Probably true! Provably true?

Posted: 19 Mar 2021 08:26 PM PDT

Let $p$ be a positive parameter in the range from zero to two.

Can one prove that $$2 +\sqrt{\frac p2} \;\leqslant\;\sqrt{\frac{a^2 + pbc}{b^2+c^2}} \,+\,\sqrt{\frac{b^2 +pca}{c^2+a^2}}\,+\,\sqrt{\frac{c^2 +pab}{a^2+b^2}}\quad?\tag{1}$$ Where $\,a,b,c\in\mathbb R^{\geqslant 0}\,$ and at most one variable equals zero.

The inequality $(1)$ is homogeneous of degree zero with regard to $a,b,c$.
Equality occurs if two variables coincide and the third one is zero.

To provide some plausibility to $(1)$ the two boundary cases $\,p=2\,$ and $\,p=0\,$ are proved:

$p=2$ is the harder bit.
W.l.o.g. assume $\,a\geqslant b\geqslant c\,$ and $\,a,b>0$. Let $u=\sqrt{\frac ab}\,+\,\sqrt{\frac ba}$, then $2\leqslant u$, and $u=2$ iff $a=b$.
I)$\:$ Let's show that $$u\:\leqslant\:\sqrt{\frac{a^2 + bc}{b^2+c^2}} \,+\,\sqrt{\frac{b^2 +ac}{a^2+c^2}}\:.\tag{2}$$ The following expression is positive: $$\begin{align} & \frac{a^2+bc}{b^2+c^2} -\frac ab & +\quad &\frac{b^2+ac}{a^2+c^2} -\frac ba\\[2ex] =\;\; & \frac{ab(a-b)+b^2c-ac^2}{b(b^2+c^2)} & +\quad &\frac{-ab(a-b)+a^2c-bc^2}{a(a^2+c^2)}\tag{3}\\[2ex] \geqslant\;\; & \frac{ac(a-c)+bc(b-c)}{a(a^2+c^2)} \;\geqslant 0 \end{align}$$ The first summand in $(3)$ has been diminished by increasing the denominator, while its numerator $\,ab(a-b) +b^2c -ac^2 =(a-b)(b-c)(a-c) + c(a-c)^2 + c^2(b-c)\,$ cannot get negative.

$(2)$ now follows from $$\begin{split} u^2\:=\:\frac ab + 2 +\frac ba \: & \leqslant\:\frac{a^2+bc}{b^2+c^2} + 2\,\underbrace{\sqrt{\frac{a^2+bc}{a^2+c^2}}}_{\geqslant 1}\;\underbrace{\sqrt{\frac{b^2+ac}{b^2+c^2}}}_{\geqslant 1}+ \frac{b^2+ac}{a^2+c^2}\\[2ex] & =\:\left(\sqrt{\frac{a^2 + bc}{b^2+c^2}} +\sqrt{\frac{b^2 +ac}{a^2+c^2}}\:\right)^2 \end{split}$$ II)$\:$ The remaining square root summand in $(1)$ is also bounded below in terms of $u$ since one has $$\frac 1{u^2-2} \:=\:\frac{ab}{a^2+b^2}\quad\implies\quad \sqrt{\frac 2{u^2-2}} \:\leqslant\: \sqrt{\frac{c^2 +2ab}{a^2+b^2}}$$
III)$\:$ Applying $3$-AGM finally proves $(1)$: $$\begin{split}\sum_\text{cyc}{\sqrt\frac{a^2 + 2bc}{b^2+c^2}} \;\geqslant\; u+\sqrt{\frac 2{u^2-2}} &\:=\:\sqrt{\frac{u^2}4} +\sqrt{\frac{u^2}4} +\sqrt{\frac 2{u^2-2}}\\[2ex] &\:\geqslant\:3\sqrt{\left(\frac{u^4}{8(u^2-2)}\right)^{1/3}} \:\geqslant\:3\end{split}$$


$p=0$ is more relaxing.
Only $2$-AGM in the form $\,a\sqrt{b^2+c^2}\leqslant\frac12\left(a^2+b^2+c^2\right)$ is needed: $$\frac a{\sqrt{b^2+c^2}} + \frac b{\sqrt{c^2+a^2}}+ \frac c{\sqrt{a^2+b^2}} \;=\;\sum_\text{cyc}\frac{a^2}{a\sqrt{b^2+c^2}} \;\geqslant\;\sum_\text{cyc}\frac{2a^2}{a^2+b^2+c^2} \;=\;2$$


$0<p<2$ returns to the question.
Here are just some ideas for how to catch the "remaining" $p$-values:

  • The above method for $p=2$ may possibly be stretched down to the $p=1$ instance: $$2 +\frac{\sqrt 2}{2} \;\leqslant\;\sum_\text{cyc}{\sqrt\frac{a^2+bc}{b^2+c^2}}$$
  • Interpolation with regard to $p$ (more a buzz word than substantial ...)
  • A concavity argument, as the two end points $p=0$ and $p=2$ are known: could proving that the second derivative with respect to $p$ is negative pave a way towards a proof? (A numerical spot-check of $(1)$ follows below.)
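
A random numerical spot-check of $(1)$ across the parameter range (evidence only, of course, not a proof):

```python
import numpy as np

rng = np.random.default_rng(1)

def margin(p, a, b, c):
    lhs = (np.sqrt((a*a + p*b*c) / (b*b + c*c))
           + np.sqrt((b*b + p*c*a) / (c*c + a*a))
           + np.sqrt((c*c + p*a*b) / (a*a + b*b)))
    return lhs - (2 + np.sqrt(p / 2))

worst = min(margin(p, *rng.random(3))
            for p in np.linspace(0, 2, 21) for _ in range(2000))
print(worst)   # nonnegative in these random trials, consistent with (1)
```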

Expected size of largest connected component

Posted: 19 Mar 2021 08:24 PM PDT

I have a sample space consisting of all undirected graphs with $n$ vertices and $m$ edges. What is the expected value of the size of the largest connected component?

My original problem is that, using the $G(n, p)$ model (the family of graphs on $n$ vertices in which any two vertices are joined by an edge with probability $p$; see: Erdős–Rényi model), I need to calculate the expected size of the largest connected component in $G(n, p)$. If the above problem can be solved, then the original problem can be solved easily.

PS: I need to calculate the exact value of the expectation for any fixed $n$ and $p$ given in the input, not just upper or lower bounds.
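
For small $n$ the uniform-$(n,m)$ expectation can be computed exactly by brute force, and the $G(n,p)$ answer then follows by weighting over $m$ with binomial probabilities. A sketch using networkx (clearly feasible only for small $n$):

```python
from itertools import combinations
import networkx as nx

def expected_largest_component(n, m):
    """Exact expectation when all graphs on n labelled vertices
    with exactly m edges are equally likely."""
    possible = list(combinations(range(n), 2))
    total = count = 0
    for edge_set in combinations(possible, m):
        G = nx.Graph()
        G.add_nodes_from(range(n))
        G.add_edges_from(edge_set)
        total += max(len(c) for c in nx.connected_components(G))
        count += 1
    return total / count

print(expected_largest_component(4, 3))
```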

If $\lim_{x \to 0}g(x)=L$, how do I prove $\lim_{h \to 0} g(f(x+h)-f(x)) = \lim_{y \to 0}g(y)$?

Posted: 19 Mar 2021 08:44 PM PDT

I know $\lim_{x \to 0}g(x)=L$; for my particular problem, $L=0$. So I've been trying to prove $\lim_{h \to 0} g(f(x+h)-f(x)) = \lim_{y \to 0}g(y)=0$ using $\epsilon$-$\delta$.
(Also, I'm assuming that $g(x)$ is continuous at $0$ and $f(x)$ is continuous at $x$.)

So I know that $\forall\epsilon$, there is some $\delta$ such that $0<|k|<\delta\Rightarrow|g(k)-L|<\epsilon$ (1)

I have to prove that $\forall\epsilon_1$,$\exists\delta_1$ such that $0<|h|<\delta_1\Rightarrow|g(f(x+h)-f(x))-L|<\epsilon_1$

Suppose $\epsilon=\epsilon'$ and $\delta=\delta'$ satisfy (1), so that $0<|k|<\delta'\Rightarrow|g(k)-L|<\epsilon'$. (2)

If I can find a value for $\delta_1$ when $\epsilon_1=\epsilon'$ such that $0<|h|<\delta_1\Rightarrow|g(f(x+h)-f(x))-L|<\epsilon'$, (3)
then I can prove $\forall\epsilon_1,\exists\delta_1$.

If I take $f(x+h)-f(x)=k$, then the right-hand sides of (2) and (3) are the same. $$\Rightarrow0<|f(x+h)-f(x)|<\delta'\Rightarrow|g(f(x+h)-f(x))-L|<\epsilon'$$ So I have to find $\delta_1$ such that $0<|h|<\delta_1\Rightarrow0<|f(x+h)-f(x)|<\delta'$?

This seems similar to another $\epsilon$-$\delta$ type condition, but I'm not sure.
I know $\lim_{h\to0}f(x+h)=f(x)$, since I assumed $f(x)$ is continuous at $x$.
Since $|h|>0\Rightarrow f(x+h)\neq f(x)\Rightarrow|f(x+h)-f(x)|>0$? If this is true, I think it has been proved.
(Also, please check if there are any other mistakes or wrong assumptions.)

Laplace transform involving derivatives and exponentials

Posted: 19 Mar 2021 08:46 PM PDT

In the Laplace domain, we have this: $$X_1(s) = \frac{1}{s+4}(U(s)-X_2(s))$$ I want to get this: $$\dot x_1(t) = -4x_1(t) + u(t)-x_2(t)$$ This is done by rearranging the first equation into: $$(s+4)X_1(s) = U(s)-X_2(s)$$ and then taking the inverse Laplace transform on both sides. However, I would like to know if I could achieve the same thing without rearranging equation 1 first. Taking the inverse Laplace transform directly on both sides of the first equation gives: $$x_1(t) = e^{-4t}(u(t)-x_2(t))$$ Then take the derivative with respect to $t$ on both sides: $$\dot x_1(t) = -4e^{-4t}(u(t)-x_2(t)) + e^{-4t}(\dot u(t)-\dot x_2(t))$$ $$\dot x_1(t) = -4x_1 + e^{-4t}(\dot u(t)-\dot x_2(t))$$

I got stuck here. If this cannot be converted to the desired form, what is the reason?
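
A note on why the direct inversion fails, offered as a sketch: the inverse Laplace transform of a product is a convolution, not the product of the inverse transforms,

$$\mathcal{L}^{-1}\{F(s)G(s)\}(t) = (f * g)(t) = \int_0^t f(\tau)\, g(t-\tau)\, d\tau,$$

so (with zero initial conditions) the first equation inverts to $x_1(t) = \int_0^t e^{-4(t-\tau)}\big(u(\tau) - x_2(\tau)\big)\, d\tau$ rather than $e^{-4t}\big(u(t) - x_2(t)\big)$; differentiating this convolution with the Leibniz rule recovers $\dot x_1 = -4x_1 + u - x_2$.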

How to show that $(F(S),\circ)$ is a semigroup.

Posted: 19 Mar 2021 08:47 PM PDT

Let $F(S)$ be the set of all fuzzy subsets of a semigroup $S$. Define the binary operation $\circ$ on $F(S)$ by, for all $f_1,f_2\in F(S)$ and all $x\in S$, \begin{equation*} (f_1\circ f_2)(x)= \begin{cases} \displaystyle{\sup_{x=a_1a_2}}\big\{\min\{f_1(a_1),f_2(a_2)\}\big\} & \text{if } x=a_1a_2\,\, \text{ for some}\,\,a_1,a_2\in S,\\ 0 & \text{otherwise.} \end{cases} \end{equation*} How can one show that $(F(S),\circ)$ is a semigroup?

Let $(a_n)_{n=1}^\infty$ be an infinite sequence of complex numbers. Prove the following limit.

Posted: 19 Mar 2021 08:50 PM PDT

Given $$\lim_{x\to \infty} \frac 1x \sum_{n\le x} a_n = k,$$ I want to prove that $$\lim_{x\to \infty} \frac {1}{\log x} \sum_{n\le x} \frac {a_n}{n} = k.$$

I'm much more interested in learning the technique(s) necessary in order to prove this rather than a direct proof. Specifically, we have learned about asymptotic estimates and summation by parts, but I don't see how I can use those techniques to prove the problem statement. Thank you for any insight!
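
One standard route that fits the tools mentioned above, sketched rather than spelled out: with $A(x)=\sum_{n\le x}a_n$, summation by parts gives

$$\sum_{n\le x}\frac{a_n}{n} = \frac{A(x)}{x} + \int_1^x \frac{A(t)}{t^2}\, dt,$$

and since $A(t) = kt + o(t)$ by hypothesis, the integral is $k\log x + o(\log x)$ while $A(x)/x$ stays bounded; dividing by $\log x$ gives the claim.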

How to determine the characteristic polynomial of the $4\times4$ real matrix of ones? [duplicate]

Posted: 19 Mar 2021 08:49 PM PDT

$$\left[\begin{array}{cccc}1&1&1&1\\1&1&1&1\\1&1&1&1\\1&1&1&1\end{array}\right]$$

I am having difficulties calculating this. Currently I'm stuck at finding the determinant of the following matrix:

$$\left[\begin{array}{cccc}\lambda-1&-1&-1&-1\\-1&\lambda-1&-1&-1\\-1&-1&\lambda-1&-1\\-1&-1&-1&\lambda-1\end{array}\right]$$

Is there a "smart" way/trick to get the result or should I just crunch the numbers through Gaussian elimination? Normally it is not that difficult to do this, but if all the elements of the matrix are the same, it somehow got me confused.
