Sunday, November 21, 2021

Recent Questions - Mathematics Stack Exchange


Help, please, with any of these; thank you

Posted: 21 Nov 2021 11:01 PM PST

Parameterization methods for Lagrange and Clairaut equations. Find the solution of the equation [image]

Write the general solution of a linear homogeneous system [image]

Find the general solution of the equation if one particular solution of the homogeneous equation is known [image]

Find the general solution of the equation [image]

Hartshorne Exercise II 6.11 (c): Grothendieck group of a nonsingular curve

Posted: 21 Nov 2021 10:54 PM PST

Exercise:

Let $X$ be a nonsingular curve over an algebraically closed field $k$.

(c) If $\mathscr{F}$ is any coherent sheaf of rank $r$ (meaning that its stalk at the generic point has dimension $r$ as a $K(X)$-vector space), show that there is a divisor $D$ on $X$ and an exact sequence $0 \rightarrow \mathscr{L}(D)^{\oplus r}\to \mathscr{F} \rightarrow \mathscr{J} \rightarrow 0$, where $\mathscr{J}$ is a torsion sheaf (meaning that its stalk at the generic point is $0$).

Next is the idea of an answer I have read:

To construct the injective morphism, take a basis for the $K(X)$-vector space $\mathscr{F}_{\xi}$, and find a suitable $\mathscr{L}(D)$ such that this basis gives global sections of $\mathscr{L}(D) \otimes \mathscr{F}$. This defines a morphism $\mathcal{O}_{X}^{\oplus r} \rightarrow \mathscr{L}(D) \otimes \mathscr{F}$, which we show to be injective, and then we tensor everything with $\mathscr{L}(D)^{-1}$. Since locally free sheaves are flat, we are done. But I have trouble carrying this out (how does one find such a proper $D$ and obtain the global sections?).

If you do it this way, could you post the complete answer? Any other approaches are welcome as well. Thanks!

Diffeomorphism between $\mathbf S^2 \times \mathbf S^2$ and complex projective hypersurface

Posted: 21 Nov 2021 10:53 PM PST

I want to show that $\{[z_0:z_1:z_2:z_3] : z_0^2+z_1^2+z_2^2+z_3^2=0\}$ and $\mathbf S^2 \times \mathbf S^2$ are diffeomorphic. How can I construct a diffeomorphism? I tried to solve this problem by using the fact that $\mathbb{CP}^1$ and $\mathbf S^2$ are diffeomorphic, but I failed. (Here $\mathbb{CP}^1$ is the complex projective line.)

Find the value of $\cos^{-1}\left(\sqrt{\frac{2+\sqrt 3}{4}}\right)$

Posted: 21 Nov 2021 10:47 PM PST

Solve $\sin^{-1}\cot\left(\cos^{-1}\left(\sqrt{\frac{2+\sqrt 3}{4}}\right)+\cos^{-1}\left(\frac{\sqrt{12}}{4}\right)+\csc^{-1}\left(\sqrt 2\right)\right)$.

My solution is as follows.

$T = \sin^{-1}\cot\left(\cos^{-1}\left(\sqrt{\frac{2+\sqrt 3}{4}}\right)+\cos^{-1}\left(\frac{\sqrt{12}}{4}\right)+\csc^{-1}\left(\sqrt 2\right)\right)$

$\csc^{-1}\left(\sqrt 2\right) = \sin^{-1}\left(\frac{1}{\sqrt 2}\right) = \frac{\pi}{4};\qquad \cos^{-1}\left(\frac{\sqrt{12}}{4}\right) = \cos^{-1}\left(\frac{\sqrt 3}{2}\right) = \frac{\pi}{6}$

$T = \sin^{-1}\cot\left(\cos^{-1}\left(\sqrt{\frac{2+\sqrt 3}{4}}\right)+\frac{\pi}{4}+\frac{\pi}{6}\right)$

I am not able to proceed further.
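One useful observation: $\cos^2\frac{\pi}{12} = \frac{1+\cos(\pi/6)}{2} = \frac{2+\sqrt 3}{4}$, so the remaining inverse cosine equals $\pi/12$ and the whole argument collapses to $\frac{\pi}{12}+\frac{\pi}{4}+\frac{\pi}{6} = \frac{\pi}{2}$. A quick numerical sanity check (plain Python, my own, just to confirm the simplification):

```python
import math

# cos^2(pi/12) = (1 + cos(pi/6)) / 2 = (2 + sqrt(3)) / 4, so acos(...) = pi/12
x = math.sqrt((2 + math.sqrt(3)) / 4)
angle = math.acos(x) + math.pi / 4 + math.pi / 6   # pi/12 + pi/4 + pi/6 = pi/2
T = math.asin(1 / math.tan(angle))                 # cot(pi/2) = 0, hence T = 0
```

So the value of the expression is $\sin^{-1}(\cot(\pi/2)) = \sin^{-1}(0) = 0$.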

Orthogonal complement of a primitive sublattice is primitive

Posted: 21 Nov 2021 10:38 PM PST

I am learning about lattices and came across the following proposition.

Let $L$ be an even unimodular lattice, i.e., a pair consisting of a free abelian group and a nondegenerate bilinear form $\langle\,,\rangle$ which satisfies

$$ \langle x,x \rangle \text{ is even } \ \ \ \ \text{for all }\ x \in L $$ and $$ L^* \cong L. $$

Let $S$ be a sublattice of $L$. The orthogonal complement of $S$ is defined by $$ S^\perp := \{l \in L \mid \langle l,s \rangle =0 \ \forall s \in S \}. $$

Proposition If $S$ is primitive (i.e. $L/S$ is torsion-free), then so is $S^\perp$.

How do we prove this proposition? Thank you very much.
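For what it's worth, here is a sketch of one standard argument (my own, hedged; note it uses only that the form is $\mathbb{Z}$-valued): suppose $n\,l \in S^\perp$ for some nonzero integer $n$ and some $l \in L$. Then for every $s \in S$,

$$ n\,\langle l, s\rangle \;=\; \langle n\,l, s\rangle \;=\; 0 \quad\text{in } \mathbb{Z}, $$

so $\langle l,s\rangle = 0$, and hence $l \in S^\perp$. Thus $L/S^\perp$ is torsion-free, i.e. $S^\perp$ is primitive.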

Hartshorne Exercise II 6.9 (a): Picard group of a singular curve

Posted: 21 Nov 2021 10:33 PM PST

Exercise:

*6.9. Singular Curves. Here we give another method of calculating the Picard group of a singular curve. Let $X$ be a projective curve over $k$, let $\tilde{X}$ be its normalization, and let $\pi: \tilde{X} \rightarrow X$ be the projection map (Ex. 3.8). For each point $P \in X$, let $\mathcal{O}_{P}$ be its local ring, and let $\tilde{\mathcal{O}}_{P}$ be the integral closure of $\mathcal{O}_{P}$. We use a $*$ to denote the group of units in a ring.

(a) Show there is an exact sequence $$ 0 \rightarrow \bigoplus_{P \in X} \tilde{\mathcal{O}}_{P}^{*} / \mathcal{O}_{P}^{*} \rightarrow \operatorname{Pic} X \stackrel{\pi^{*}}{\rightarrow} \operatorname{Pic} \tilde{X} \rightarrow 0. $$ [Hint: Represent $\operatorname{Pic} X$ and $\operatorname{Pic} \tilde{X}$ as the groups of Cartier divisors modulo principal divisors, and use the exact sequence of sheaves on $X$ $$ 0 \rightarrow \pi_{*} \mathcal{O}_{\tilde{X}}^{*} / \mathcal{O}_{X}^{*} \rightarrow \mathscr{K}^{*} / \mathcal{O}_{X}^{*} \rightarrow \mathscr{K}^{*} / \pi_{*} \mathcal{O}_{\tilde{X}}^{*} \rightarrow 0.] $$

The exact sequence in the hint is straightforward, and I have proved that $\pi_*\mathcal O_{\tilde X}^*/\mathcal O_X^*\simeq\bigoplus_{P\in X} \tilde{\mathcal O}_P^*/ \mathcal O_P^*$. But I cannot get any further. (If you need the proof of the isomorphism, I can provide it.)

Could you give a complete answer or some hints? Thanks!

Confused about the domain of this function

Posted: 21 Nov 2021 10:36 PM PST

The textbook seeks the domain of $f/g$, where $ f(x)= \sqrt {x }$ and $g(x) = |x-3|$. The answer stated is $(0,\infty]$. I have two questions here:

  1. Shouldn't $3$ be excluded from the domain, since one cannot divide by zero?
  2. Is it okay to use a square bracket at infinity? I have never seen infinity included in the domain of any function before, and my teacher clearly stated that infinity is always followed by a parenthesis, not a square bracket. The stated answer to a similar problem, seeking the domain of $g(x) = |x-3|$, is $(-\infty,\infty]$. How? Please explain this as well.
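As a sanity check on the first point, here is a quick snippet (plain Python, my own, not from the textbook) confirming that the quotient $f/g$ is undefined at $x=3$ but perfectly fine at $x=0$:

```python
import math

def f(x):
    return math.sqrt(x)

def g(x):
    return abs(x - 3)

# g(3) = 0, so (f/g)(3) would divide by zero; at x = 0, f(0)/g(0) = 0/3 = 0 is fine
quotient_at_0 = f(0) / g(0)
```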

Second derivative: how should one think about it?

Posted: 21 Nov 2021 10:27 PM PST

I am a little familiar with higher-order derivatives of functions of a real variable, but I got confused when I tried to find their analogues in several variables.

Let $f:\mathbb{R}^m\rightarrow \mathbb{R}^k$ be a function. It is differentiable at $\mathbf{a}\in\mathbb{R}^m$ if there is a linear map $T_{\mathbf{a}}:\mathbb{R}^m\rightarrow\mathbb{R}^k$ such that $ \lim_{|\mathbf{h}|\rightarrow 0} \frac{|f(\mathbf{a}+\mathbf{h})-f(\mathbf{a})-T_{\mathbf{a}}(\mathbf{h})|}{|\mathbf{h}|}=0. $ In this case, we call $T_{\mathbf{a}}$ the derivative of $f$ at $\mathbf{a}$.

Now, if we have a function $\varphi(x)=x^2$ (of one variable), we usually write its derivative at $a$ as $2a$. In the above language, it is the linear map $$T_a:h\mapsto 2a h.$$ Then what is the correct way to think about the function $\varphi'$ on $\mathbb{R}$? Is it the map $a\mapsto T_a$? If yes, how can we talk about $(\varphi')'$?

I tried to carry the second derivative over from single-variable to multivariable functions, but I am confused about how to express the derivative as a function in the multivariable case, and then take its derivative. Can anyone help?

How do I compute E(UV)? If I assume independence, then the covariance would be zero and the correlation would be 0.

Posted: 21 Nov 2021 10:15 PM PST

[image]

I'm trying to figure out how to compute E(UV) in order to find the covariance. If I assume independence, then the covariance would be 0, and hence the correlation would be zero.

How do I tackle the problem?

$f$ is a polynomial of degree $101$ with $f(n)=\lfloor 1.01n\rfloor$ for all $n \in \{99, 100, \ldots, 200\}$. Find $f(0)$.

Posted: 21 Nov 2021 11:01 PM PST

Let $f$ be the polynomial of degree $101$ such that $f(n)=\lfloor1.01n\rfloor$ for all $n \in \{99, 100, \ldots, 200\}$. Find $f(0)$. I think we should use Lagrange interpolation: $f(0) = \sum\limits_{i=99}^{200}\frac{\lfloor 1.01i\rfloor \prod\limits_{j=99,\,j\neq i}^{200} (0 - j)}{\prod\limits_{j=99,\,j\neq i}^{200} (i - j)}$. Is it really possible to solve? Or is it a dead end?
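The sum is at least mechanically computable with exact rational arithmetic. Here is a hedged sketch (my own code, not from the post) that evaluates the Lagrange interpolant through the $102$ data points:

```python
import math
from fractions import Fraction

def lagrange_eval(points, x):
    """Evaluate the Lagrange interpolating polynomial through `points` at `x`."""
    total = Fraction(0)
    for i, (xi, yi) in enumerate(points):
        term = Fraction(yi)
        for xj, _ in points[:i] + points[i + 1:]:
            term *= Fraction(x - xj, xi - xj)
        total += term
    return total

# the 102 data points (n, floor(1.01 n)) for n = 99, ..., 200, in exact arithmetic
pts = [(n, math.floor(Fraction(101, 100) * n)) for n in range(99, 201)]
f0 = lagrange_eval(pts, 0)
```

Using `Fraction` avoids the catastrophic cancellation that floating point would suffer with products of this size.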

Find a function for the distance between 2 vector equations

Posted: 21 Nov 2021 10:11 PM PST

I have two lines, $\bar u=(2, 4, 4)+t(4, 1, 5)$ and $\bar v=(1, -3, 2)+s(-2, 3, 1)$, and I want to express the distance between a point on $\bar u$ and a point on $\bar v$ as a function of two variables, $\mathbb{R}^2 \rightarrow \mathbb{R}$. I know the distance between two points in three dimensions to be $\sqrt{(u_1-v_1)^2+(u_2-v_2)^2+(u_3-v_3)^2}$. But I have some trouble writing down the distance for these lines because there are infinitely many points on a line. How should I proceed?
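One way to see it: give each line its own parameter (say $t$ for $\bar u$ and $s$ for $\bar v$, an assumption on my part since the post uses $t$ for both) and substitute the parametrizations into the distance formula. A small sketch:

```python
import math

def u(t):  # point on the first line
    return (2 + 4 * t, 4 + 1 * t, 4 + 5 * t)

def v(s):  # point on the second line
    return (1 - 2 * s, -3 + 3 * s, 2 + 1 * s)

def dist(t, s):
    """Distance between u(t) and v(s): a function from R^2 to R."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(u(t), v(s))))
```

Minimizing `dist` (or its square) over $(t,s)$ would then give the distance between the lines themselves.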

How does showing $\tau(i) = j \implies a\tau a^{-1}(a(i)) = a(j)$ demonstrate that $a\tau a^{-1}$ has the same cycle type as $\tau$?

Posted: 21 Nov 2021 10:35 PM PST

I am completely new to group theory. My class is beginning to learn about centralizers, which, if I am not mistaken, is the set of all elements $x$ in a group $G$ that commute with a specified element $a$ in $G$. To help us calculate the cardinality of the centralizer of an element in $S_5$, my teacher has introduced the orbit-stabilizer theorem $|O_x| \cdot |G_x| = |G|$ (I still do not understand what an orbit or stabilizer refers to). He states that the cardinality of the centralizer of an element in $S_5$ is $|G_x|$ (because I am still confused as to what a stabilizer is, I am not sure how this is true) and that $|O_x|$ is the "conjugacy class" (I do not remember the exact terminology my teacher employed), so we can calculate $|G_x|$ by finding $\frac{|G|}{|O_x|}$.

To lead us to finding $|O_x|$, my teacher has stated that we have to show an equivalence between the "conjugacy class" and the cycle types of $S_5$; to do so, we must show $\tau(i) = j \implies a\tau a^{-1}(a(i)) = a(j)$ for cycles $a, \tau$.

I understand that $a\tau a^{-1}(a(i)) = a\tau (a^{-1}a)(i) = a\tau(i) = a(j)$. However, how does showing this implication demonstrate that $a\tau a^{-1}$ has the same cycle type as $\tau$? Additionally, how does this relate to the supposed equivalence between the "conjugacy class" and cycle types of $S_5$?

Thanks in advance!
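The implication says that wherever $\tau$ sends $i \mapsto j$, the conjugate $a\tau a^{-1}$ sends $a(i) \mapsto a(j)$; that is, conjugation by $a$ just relabels every entry of every cycle of $\tau$, so the cycle lengths are unchanged. A small check in plain Python (my own illustration, with arbitrarily chosen permutations):

```python
# permutations of {0,...,4} as tuples p, where p[i] is the image of i
def compose(p, q):
    """(p o q)(i) = p(q(i))."""
    return tuple(p[q[i]] for i in range(len(p)))

def inverse(p):
    inv = [0] * len(p)
    for i, pi in enumerate(p):
        inv[pi] = i
    return tuple(inv)

def cycle_type(p):
    """Sorted tuple of cycle lengths of p."""
    seen, lengths = set(), []
    for i in range(len(p)):
        if i not in seen:
            n, j = 0, i
            while j not in seen:
                seen.add(j)
                j = p[j]
                n += 1
            lengths.append(n)
    return tuple(sorted(lengths))

tau = (1, 2, 0, 4, 3)                          # cycle decomposition (0 1 2)(3 4)
a = (2, 0, 3, 1, 4)                            # an arbitrary permutation
conj = compose(compose(a, tau), inverse(a))    # a o tau o a^{-1}
```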

When will the probability of finding identical chips be more than N%?

Posted: 21 Nov 2021 10:15 PM PST

In each game, there are some number of chips on the table; there are 400 different types of chips in total. What is the minimum number of chips (let's call it $R$) on the table for the probability of finding at least $2$ identical chips to be higher than $N\%$?

I think that if $R$ equals $401$, then the probability of having two identical chips is $1$ (i.e. $100\%$) by pigeonhole, and if it is $201$ the probability should be more than $0.5$. But it would be nice to have a formula to calculate $R$ from $N$, as well as confirmation that my thinking isn't wrong.
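Assuming each chip's type is independent and uniform over the $400$ types (the post does not specify the model, so this is an assumption), the collision probability is the birthday-problem formula $1 - \prod_{i=0}^{R-1}\frac{400-i}{400}$. A sketch:

```python
def prob_collision(r, types=400):
    """P(at least two of r independent uniform chips share a type)."""
    p_distinct = 1.0
    for i in range(r):
        p_distinct *= (types - i) / types
    return 1 - p_distinct

def min_chips(threshold, types=400):
    """Smallest r whose collision probability strictly exceeds `threshold`."""
    r = 1
    while prob_collision(r, types) <= threshold:
        r += 1
    return r
```

Under this model the 50% mark is already crossed at around two dozen chips, far below $201$; at $r = 401$ the probability is exactly $1$, matching the pigeonhole argument.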

Calculating $\int_0^\infty \frac{e^{-x}-e^{-2x}}{x} dx$

Posted: 21 Nov 2021 10:38 PM PST

I have some trouble calculating a definite integral

$$\int_0^\infty \frac{e^{-x}-e^{-2x}}{x} dx.$$

I don't even know how to start. I have a hypothesis of using the limit

$$\lim _{x\to\infty} \int_x^{2x} e^{-t}\ln\left(\frac{t}{2}\right) dt = 0.$$

I would appreciate any help or hint.
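For reference, this is a Frullani integral: for suitable $f$, $\int_0^\infty \frac{f(ax)-f(bx)}{x}\,dx = \big(f(0)-f(\infty)\big)\ln\frac{b}{a}$, which with $f(x)=e^{-x}$, $a=1$, $b=2$ gives $\ln 2$. A crude numerical cross-check in plain Python (trapezoidal rule; the cutoffs are arbitrary choices of mine):

```python
import math

def integrand(x):
    return (math.exp(-x) - math.exp(-2 * x)) / x

# trapezoidal rule on [1e-9, 40]; near 0 the integrand tends to 1, at 40 it is ~0
a, b, n = 1e-9, 40.0, 400_000
h = (b - a) / n
total = 0.5 * (integrand(a) + integrand(b))
for i in range(1, n):
    total += integrand(a + i * h)
approx = h * total   # should be close to ln 2
```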

Regular expression for a language string

Posted: 21 Nov 2021 10:50 PM PST

I'm trying to build a regular expression for this language: $$L=\{w\in\{0,1\}^*: \text{at least two} \ 0's \ \text{and at most two} \ 1's\}$$

So this means the language requires $|w|_0 \geq 2$ and $|w|_1 \leq 2$. This is what I have come up with: $$100(0)^*+1100(0)^*+00(0)^*$$ Is this regular expression correct?
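One way to test a candidate expression is to enumerate all short words and compare against the definition of $L$ directly. Translating the attempt into Python regex syntax ($+$ becomes `|`, and `(0)^*` becomes `0*`; the translation is mine), a brute-force check:

```python
import re
from itertools import product

proposed = re.compile(r"100+|1100+|00+")   # 100(0)* + 1100(0)* + 00(0)*

def in_L(w):
    """At least two 0's and at most two 1's."""
    return w.count("0") >= 2 and w.count("1") <= 2

# search short words for any disagreement between the expression and L
mismatches = [w for n in range(1, 7)
              for w in ("".join(p) for p in product("01", repeat=n))
              if in_L(w) != bool(proposed.fullmatch(w))]
```

The list turns out non-empty (for instance $001 \in L$ is not matched), so the expression as written misses words where a $1$ comes after or between the $0$'s.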

Two proper subsets of the real numbers, $A$ and $B$, with the following conditions: $A$ and $B$ are closed, $A \cap B$ is empty, and $\inf\{d(a,b)\} = 0$

Posted: 21 Nov 2021 10:40 PM PST

I'm trying to show that there can exist two proper subsets of the real numbers, $A$ and $B$, with the following conditions:

(1) $A$ and $B$ are closed (with respect to the usual metric)

(2) $A \cap B$ is empty.

(3) $\inf\{d(a,b): a \in A, b \in B \} = 0$

Obviously, for just the first condition $A = [0,1] = B$ would work.

Add in the second, and we can have $A = [0,1]$ and $B = [2,3]$.

Now the third condition is where this problem gets me. I'm not sure how to achieve $\inf = 0$ for disjoint closed subsets in this context.

Can anyone think of an example for this?

Edit: To clarify, I'm trying to find an example that satisfies all three conditions
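A standard example to keep in mind (a classic construction, not taken from the post) is $A=\{n : n\ge 2\}$ and $B=\{n+\frac1n : n\ge 2\}$: both are closed (neither has an accumulation point), they are disjoint, yet $d(n, n+\frac1n)=\frac1n \to 0$. A quick numeric illustration:

```python
# A = {n}, B = {n + 1/n} for n >= 2: closed, disjoint, but the gaps 1/n shrink to 0
N = 1000
A = [float(n) for n in range(2, N)]
B = [n + 1 / n for n in range(2, N)]
gaps = [b - a for a, b in zip(A, B)]   # gap of the matched pair is exactly 1/n
```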

Ellipse in the triangle

Posted: 21 Nov 2021 10:11 PM PST

Find the equation of an ellipse if its center is $S(2,1)$ and the sides of the triangle $PQR$, with $P(0,0)$, $Q(5,0)$, $R(0,4)$, are tangent lines to this ellipse.

My attempt: Take a point on the line $PQ$, say $(m,0)$. Then we have an equation of the tangent line at this point: $(a_{11}m+a_1)x+(a_{12}m+a_2)y+(a_1m+a)=0$, where $a_{11}$, etc. are the coefficients of our ellipse $a_{11}x^2+2a_{12}xy+a_{22}y^2+2a_1x+2a_2y+a=0$. Now if $PQ$ is $y=0$, then $a_{11}m+a_1=0$, $a_{12}m+a_2=1$, $a_1m+a=0$. I have tried this method for the other two lines $PR$ and $RQ$, and I got $11$ equations (including the equations for the center)! Is there a better solution to this problem?

Theorem 3 on page 80 of Kolmogorov and Fomin's Volume 1

Posted: 21 Nov 2021 10:14 PM PST

Hi all: I will write out exactly what is in the book in case people don't have it. I understand the first part of the theorem but not the second part. What follows is almost word for word Theorem 3, starting on page 80 of Kolmogorov and Fomin, Volume 1.

Theorem 3: Let $f(x) \neq 0$ be a given functional. The subspace $L_f$ (defined as the subspace of all points $x$ such that $f(x) = 0$) has index equal to unity, i.e., an arbitrary element $y \in R$ can be represented in the following form:

$$y = \lambda x_{0} + x $$ where $x \in L_{f}$ and $x_{0} \not\in L_{f}$

Proof: Since $x_{0} \not\in L_{f}$, we have $f(x_{0}) \neq 0$. If we set $\lambda = \frac{f(y)}{f(x_{0})}$ and $x = y - \frac{f(y)}{f(x_{0})}x_{0}$, then $y = \lambda x_{0} + x$, where $f(x) = f(y) - \frac{f(y)}{f(x_{0})}f(x_{0}) = 0$.

If the element $x_{0}$ is fixed, then the element $y$ can be represented in the form $(5)$ uniquely. This is easily proved by assuming the contrary. In fact, let $$ y = \lambda x_{0} + x, $$ $$ y = \lambda^{\prime} x_{0} + x^{\prime}; $$ then $$ (\lambda - \lambda^{\prime})x_{0} = x^{\prime} - x. $$

Now, if $\lambda - \lambda^{\prime} = 0$, then $x^{\prime} - x = 0$. On the other hand, if $\lambda - \lambda^{\prime} \neq 0$, then $x_{0} = \frac{x^{\prime} - x}{\lambda - \lambda^{\prime}} \in L_{f}$, which contradicts the condition that $x_{0} \not\in L_{f}$. Therefore $x^{\prime} - x = 0$, which means that the representation of $y$ is unique.

Conversely, given a subspace $L$ of $R$ of index 1,$L$ defines a continuous linear functional $f$ which vanishes precisely on $L$. Indeed, let $x_{0} \not\in L$. Then, for any $x \in R$, $x = y + \lambda x_{0}$ with $y \in L$, $x_{0} \not\in L$. Let $f(x) = \lambda$. It is easily seen that $f$ satisfies the above requirements. If $f$ and $g$ are two such linear functionals defined by $L$, then $f(x) = \alpha g(x)$, for all $x \in R$, $\alpha$ a scalar. This follows because the index of $L$ in $R$ is 1.

I follow the theorem above up until the part that starts with "Conversely". I'm not even clear on what is being proven in the converse, nor on the steps of the proof. If anyone could explain it, that would be appreciated. They also switched the places of $x$ and $y$, compared to the first part of the proof, when representing a point in $R$, and I'm not sure why they did that. Also, I'm not clear on what requirements are being satisfied. Thanks a lot for any enlightenment.

On a relation between volume of subsets of $\mathbb R^n$

Posted: 21 Nov 2021 10:31 PM PST

$\mathbf {The \ Problem \ is}:$ Suppose that for all Lebesgue measurable subsets $A,B$ of $\mathbb R^n$ we have ${v(A+B)}^r \geq {v(A)}^p + {v(B)}^q$ for some $0\leq p,q,r \leq 1$, where $v(X)$ denotes the volume of $X$ (which equals the Lebesgue measure of $X$). Show that $p+q=r.$

$\mathbf {My \ approach}:$ I tried to use dilation (by $\delta >0$) of the sets $A,B$. Then ${v(\delta A)}={\delta}^n \, v(A)$, and $\delta(A+B)=\delta A+\delta B$.

But I couldn't get any further.

A hint would be appreciated; thanks in advance.
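For what it's worth, feeding the dilated sets back into the hypothesis (my own continuation of the dilation idea, hedged) gives, for every $\delta>0$,

$$ \delta^{nr}\,{v(A+B)}^{r} \;=\; {v(\delta A+\delta B)}^{r} \;\geq\; {v(\delta A)}^{p}+{v(\delta B)}^{q} \;=\; \delta^{np}\,{v(A)}^{p}+\delta^{nq}\,{v(B)}^{q}, $$

and comparing how the two sides scale as $\delta\to 0$ and $\delta\to\infty$ should then constrain the exponents $p,q,r$.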

$X_i$ are i.n.i.d. bounded RVs; show a central limit theorem holds

Posted: 21 Nov 2021 10:59 PM PST

We are asked to show that if the $X_i$ are independent but not identically distributed, and all $X_i$ are bounded, with $S_n = \sum_{i=1}^n X_i$ and $s^2_n = \operatorname{Var}(S_n)$ where $s^2_n \to \infty$, then $\frac{S_n}{s_n}$ satisfies a central limit theorem, i.e. converges in distribution to $N(0,1)$.

I'm trying this with either Lindeberg or Lyapunov, using the following statement:

If the $X_i$ are bounded, then $|X_i| \le M$ for some $M$, and thus $X_i^2$ is bounded too, by $M^2$. The Lindeberg condition is:

$\lim_{n \to \infty} \frac{1}{s^2_n}\sum_{i=1}^{n} \int_{|X_i| > \epsilon s_n} X^2_i \, dP = 0.$

Since the $X^2_i$ are bounded by $M^2$ while $\epsilon s_n \to \infty$, the sets $\{|X_i| > \epsilon s_n\}$ are eventually empty, so each integral vanishes for large $n$; thus the Lindeberg condition holds, which means a central limit theorem applies.

Coequalizer of an idempotent and the identity implies the idempotent splits

Posted: 21 Nov 2021 10:25 PM PST

I am reading "Handbook of Categorical Algebra" by Francis Borceux. Proposition 6.5.4 is the equivalence of coequalizers, equalizers, and splittings.

Definition of splitting:

In a category $\mathcal{C}$, an idempotent $e:C\to C$ splits when there exists a retract $r,i:R\leftrightarrows C$ of $C$ such that $i\circ r=e$.

I am trying to understand the direction coequalizer implies splitting for an idempotent $e:C\to C$:

The coequalizer $\text{Coker}(e,1_C)$ exists $\implies$ $e$ splits as $e=i\circ r$ with $r,i:R\leftrightarrows C$ and $r\circ i=1_R$

In the proof, he lets $r=\text{Coker}(e,1_C)$. Then he says "the relation $e\circ e=e=e\circ1_C$ implies the existence of a unique $i$ such that $i\circ r=e$."

I do not understand how that relation implies the existence of such an $i$. I understand the rest of the proof, just not where $i$ comes from.
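My reading of where $i$ comes from (a hedged sketch, not Borceux's wording): $r=\operatorname{Coker}(e,1_C)$ is universal among morphisms coequalizing $e$ and $1_C$, and the displayed relation says that $e$ itself is such a morphism:

$$ e\circ e = e\circ 1_C \quad\Longrightarrow\quad e \text{ factors through } r, \text{ i.e. } \exists!\, i:R\to C \ \text{with}\ i\circ r = e. $$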

Tables of zeros of $\zeta(s)$

Posted: 21 Nov 2021 10:50 PM PST

Is there a table somewhere of the $n$th zero of $\zeta(s)$ for $n = 10^k$ for $k = 0,1,2,\ldots$? I need the values for $k$ up to as large as is known (e.g., $k = 22$). Same question for $n = 2^k$, or for other powers of a fixed integer. This is not found at http://www.dtc.umn.edu/~odlyzko/zeta_tables/, nor at https://www.lmfdb.org. Mathematica goes only to $10^8$. I need out to $10^{22}$.

Why do informal algebra techniques lead to incorrect solutions?

Posted: 21 Nov 2021 10:16 PM PST

I was recently solving a system of linear equations, 3 equations in 3 unknowns. I first solved via row reduction of the matrix and got a valid answer, but my friend attempted to solve the system using informal algebra methods and got a wrong answer. I know his answer is wrong, but I am struggling to explain which mathematical rule he broke.

Here is the system:

$z-x-y=0$

$z-2x=0$

$2x+y-3z=0$

Combining the first and third equations, one gets $x=-2y$. Plugging this back into equation one, one gets $z=-y$. Setting $x=1$ gives the vector $\langle 1,-1/2,1/2\rangle$. This vector satisfies equations 1 and 3, but not equation 2.

Now I know that this is not the proper technique for solving a system in three variables, and that since equation 2 was never used, one should not expect it to be satisfied. I know that this solution is wrong, but I am unsure how to explain what is wrong about it other than saying "that's not the way it's done." I personally made this mistake when first learning linear algebra, and "that's not the way it's done" is all my teacher could say. If anyone has a better explanation for what exactly is wrong here, I would greatly appreciate it. Also, since equation 2 is not being used, we really have 2 equations in 3 unknowns, so there should be 2 free variables, not one (again, I think this is what happens if elimination is "done correctly").
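A direct check (plain Python, my own) confirms both facts at once: the candidate vector fails equation 2, and the coefficient matrix has nonzero determinant, so the only solution of the full homogeneous system is actually $(0,0,0)$:

```python
# the three homogeneous equations as residual functions
eqs = [lambda x, y, z: z - x - y,
       lambda x, y, z: z - 2 * x,
       lambda x, y, z: 2 * x + y - 3 * z]

cand = (1, -0.5, 0.5)
residuals = [f(*cand) for f in eqs]   # middle entry is nonzero: eq. 2 fails

def det3(m):
    """Determinant of a 3x3 matrix given as nested lists."""
    (a, b, c), (d, e, f), (g, h, i) = m
    return a * (e * i - f * h) - b * (d * i - f * g) + c * (d * h - e * g)

coeffs = [[-1, -1, 1], [-2, 0, 1], [2, 1, -3]]
d = det3(coeffs)   # nonzero determinant: only the trivial solution exists
```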

Conditional expectation of pullback of sigma algebra

Posted: 21 Nov 2021 10:58 PM PST

Let $T: X \to X$ be a measure-preserving invertible transformation and let $F$ be a sub-sigma-algebra. My book says it is clear that $E(\chi_{T^{-1}A} | T^{-1}F)(x)=E(\chi_A | F)(Tx)$, but I am not sure why, especially considering that the right-hand side doesn't seem to be $T^{-1}F$-measurable.

Edit: The book doesn't mention it, but I think it is implicitly assuming that $T^{-1}F \subset F$. This is from an ergodic theory textbook, and this result is used to prove that the conditional entropy satisfies $H(T^{-1}A | T^{-1}F)=H(A|F)$

Can a factor of a number have the same residue as that number

Posted: 21 Nov 2021 10:48 PM PST

If I have four numbers ($n$, $m$, $x$ and $r$) where $n$ and $m$ are random integers and $r = n \bmod m$, is it possible that for any prime factor of $n$ (denoted by $x$), $\frac{n}{x} \bmod m = r$?
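Reading "any" as "every", a small brute-force search (my own sketch) finds such pairs easily; for example $n=4$, $m=2$ works, since $4 \bmod 2 = 0$ and $4/2 = 2 \equiv 0 \pmod 2$:

```python
def prime_factors(n):
    """Set of distinct prime factors of n, by trial division."""
    fs, d = set(), 2
    while d * d <= n:
        while n % d == 0:
            fs.add(d)
            n //= d
        d += 1
    if n > 1:
        fs.add(n)
    return fs

# pairs (n, m) where n // x has the same residue mod m as n, for every prime x | n
hits = [(n, m) for n in range(2, 200) for m in range(2, 50)
        if all((n // x) % m == n % m for x in prime_factors(n))]
```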

Expectation of inverse of sum of iid random variables

Posted: 21 Nov 2021 10:50 PM PST

Description:

Let $\gamma_{i,k}={1}/{(\Gamma_{i}^{k})^{a}}$ where $\Gamma_{i}^{k}=\sum_{t=1}^{k}I_{i}^{t}$, $a\in (1/2, 1]$, and $\mathbb{E}[I_{i}^{k}]=p_{i}>0$ for all $i$ and $k$. Here $I_{i}^{k}$ is an indicator variable: \begin{align*} I_{i}^{k}= \begin{cases} 1, & \text{if the event happened;}\\ 0, & \text{if not.} \end{cases} \end{align*}
Moreover, the indicators $I_{i}^{k}$, $i=1,\dots,N$, are i.i.d. across time steps.

Can we give upper bounds for the following?

\begin{align*} (a) \mathbb{E}\left[ \gamma_{i,k}^{2}\right] \leq ? \quad\quad (b) \mathbb{E}\left[ \left| \gamma_{i,k}-\frac{1}{k^{a}p_{i}^{a}} \right| \right] \leq ? \end{align*}

Finding conditions under which an inequality holds true

Posted: 21 Nov 2021 10:25 PM PST

I am facing the following problem.

Assume $(x_1,x_2,\dots,x_m)$ are arbitrary positive real numbers and we have an ordered sequence $d_1\le d_2\le \cdots \le d_m$ of positive real numbers. Define

$$A = (1+\frac{b_1}{a_1}x_1)^{d_1a_1}(1+\frac{b_1}{a_2}x_2)^{d_2a_2}\cdots(1+\frac{b_1}{a_m}x_m)^{d_ma_m}$$

and

$$B = (1+\frac{b}{a}\sum_{i=1}^{m}x_i)^{d_1a}(1+\frac{b}{a}\sum_{i=2}^{m}x_i)^{(d_2-d_1)a}\cdots (1+\frac{b}{a}\sum_{i=k}^{m}x_i)^{(d_k-d_{k-1})a}\cdots \\ (1+\frac{b}{a}x_m)^{(d_m-d_{m-1})a}$$

where $\forall i: 0 \le a_i \le C$ , $0 \le b_1 \le C$, $0 \le b \le C$ and $0 \le a \le C$ for a known value of parameter $C$ and there are constraints $b_1+\sum_{i=1}^{m}a_i \le C$ and $a+b \le C$. I am trying to find the conditions on $a_i$ which makes $B \ge A$.

Can I use Karamata's inequality? If I can show that the sequence $\left[(1+\sum_{i=1}^{m}x_i)^{d_1a}, (1+\sum_{i=2}^{m}x_i)^{(d_2-d_1)a},\cdots,(1+x_m)^{(d_m-d_{m-1})a}\right]$ majorizes $\left[(1+x_1)^{d_1a_1},(1+x_2)^{d_2a_2},\cdots,(1+x_m)^{d_ma_m}\right]$, then I can use the function $f(x)=\ln(x)$ to arrive at the desired result. Is this the correct course of action? Is there any other way, any other inequality?

Any suggestion would be appreciated.

Prove that the zeros of this sum of rational functions have less than a certain modulus

Posted: 21 Nov 2021 10:25 PM PST

From 2.1 of Sheil-Small:

Define $$R(z):=\sum_{k=1}^n \frac{c_k}{\left({z-z_k}\right)^m},$$ where $c_k>0$ and $|z_k|\leq 1$.

Prove that all finite zeros of $R(z)$ are in $\{|z|\leq \frac{1}{2^{1/m}-1}\}$.

Attempted solution

The case $m=1$ is not hard; you simply set $R(z)=0$, multiply on top and bottom by conjugates, then rearrange to see that any zero $w$ of $R$ must lie in the convex hull of the $z_k$.

The same approach does not appear to extend to greater $m$ because it's not clear how to get $w$ on one side; in particular, Jensen's Inequality doesn't apply because $z$ and $z_k$ need not be positive real.

I also considered

  • Rouché's Theorem, but it's not clear how to find suitable $f$ and $g$.
  • the argument principle, but $\frac{R'(z)}{R(z)}$ is too messy to work with.
  • reframing the problem in terms of convex hulls. Indeed, wlog we may assume that $\sum c_k = 1$, so that if $w$ is a zero of $R(z)$ then the origin lies in the convex hull of the $\frac{1}{w-z_k}$. I think that this is the way to go, esp. because the chapter is on the FTA and Gauss-Lucas, but I don't know how to proceed.

Mohr-Mascheroni with collapsing compass

Posted: 21 Nov 2021 10:20 PM PST

By famous Mohr-Mascheroni theorem

Every geometric construction that can be carried out by compass and straightedge can be done with the compass only (without a straightedge).

To say in short, to prove the theorem, we have to prove that the three following constructions can be done with only compass:

  1. Points of intersection of two circles given by center and one of the points for each circle
  2. Points of intersection of a circle (given by center and one of its points) and a straight line (given by two points).
  3. Point of intersection of two straight lines each of them given by two points.

I was reading "A short elementary proof of the Mohr-Mascheroni Theorem" by Norbert Hungerbühler.

But it seems to me that the author uses transport of measure by the compass.

I suspect that we can avoid the usage of transport of measure by compass in the proof of Mohr-Mascheroni theorem. That is I do believe that every point constructible by collapsing compass and a straightedge can be constructed by means of collapsing compass only. But unfortunately I still find myself unable to do that.

P.S. It seems to me that, despite the comments below, the construction in Problem 4 of the book by Kostovskiy mentioned in the answer by @saltandpepper uses measure transport as well [constructing the circles $(O,a)$, $(C, OE)$, $(D, OE)$].

If $X$ and $Y$ are two non-negative integer RVs, show that $E[XY] = \sum_{n=1}^{\infty}\sum_{m=1}^{\infty} P(X\ge n, Y\ge m)$

Posted: 21 Nov 2021 10:51 PM PST

I apologize for the missing LaTeX commands, but I'll try my best.

Show that $E[XY] = \sum\sum P(X \ge n, Y \ge m)$

The outer summation is for $n$ and the inner summation is for $m$ and the limits are as follows: \begin{gather*} 1\le n < \infty\\ 1\le m < \infty. \end{gather*}

The problem before this asked me to prove that the expectation of a non-negative integer RV can be written in distributional form instead of the conventional $\sum_x x\,P(X=x)$ form.

That is, $E[X]= \sum P(X \ge n)$ for $1\le n < \infty$

I did that successfully. However, I am slightly stuck here in proving joint expectation. In order for me to use my previous results, I have to somehow break up the joint PMF as a product of marginal PMFs which is only possible if the two RVs are independent. The problem does not state that the two RVs are independent. Any thoughts on how I could approach this?

So I was thinking this… I don't know how much of it makes sense with my broken $\LaTeX$, but I'll try it anyway :).

$\left(\sum_{n} P(X \ge n, Y \ge m)\right) \cdot \left(\sum_{m} P(X \ge n, Y \ge m)\right)$

The first summation would be the expected value of $Y$ and the second the expected value of $X$, using the marginals... does that make sense?

Thank you.

-SK

PS: Yes, this is a homework assignment and I am not looking for the answer. Just a nudge in the right direction would be peachy! :)
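As a nudge toward trusting the identity itself: it holds without any independence, which a tiny numerical experiment shows (my own example distribution, deliberately dependent):

```python
# numeric check of E[XY] = sum_n sum_m P(X >= n, Y >= m) on a small joint pmf
# joint[(x, y)] = P(X = x, Y = y); X and Y are dependent here on purpose
joint = {(0, 1): 0.2, (1, 0): 0.1, (1, 2): 0.3, (2, 1): 0.25, (3, 3): 0.15}

exy = sum(x * y * p for (x, y), p in joint.items())

# double tail sum; ranges up to 9 suffice since all outcomes are at most 3
tail_sum = sum(p for n in range(1, 10) for m in range(1, 10)
               for (x, y), p in joint.items() if x >= n and y >= m)
```

The point of the computation: each outcome $(x,y)$ is counted by exactly $x\cdot y$ of the tail events $\{X\ge n, Y\ge m\}$, which is the same counting trick as in the one-variable problem, applied in both coordinates, with no factorization of the joint PMF needed.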
