Monday, April 25, 2022

Recent Questions - Mathematics Stack Exchange

Example for an injective but not surjective polynomial $\mathbb{R}^2\rightarrow\mathbb{R}^2$?

Posted: 25 Apr 2022 02:51 AM PDT

The Ax-Grothendieck theorem states that every injective polynomial function $\mathbb{C}^n\rightarrow\mathbb{C}^n$ is surjective. A proof using Hilbert's Nullstellensatz can be found in Terry Tao's blog here. Since the latter theorem holds for algebraically closed fields, I tried to see what happens to the former theorem if $\mathbb{Q}$ or $\mathbb{R}$ is taken instead. The first case is simple: $\mathbb{Q}\rightarrow\mathbb{Q},x\mapsto x^3$ is injective but not surjective (it's quite funny that you can use Fermat's last theorem to prove $2$ doesn't have a preimage). The second case seemed settled as well: every injective polynomial $\mathbb{R}\rightarrow\mathbb{R}$ has to be surjective because of the different limits for $x\rightarrow\pm\infty$ and the intermediate value theorem, so there is no counterexample there, but I think there should be one for $\mathbb{R}^2\rightarrow\mathbb{R}^2$. After hours, I still haven't found one.

I tried using the binomial theorem, but examples like $P\colon(x,y)\mapsto(x^2-y^2,x-y)$ (which doesn't work, as $P(1,1)=P(2,2)$) or $(x,y)\mapsto(x(x-y),y(x-y))$ (which doesn't work, as $P(x,y)=P(-x,-y)$) just don't fit. Using Hilbert's Nullstellensatz and the sufficient conditions for being injective and not surjective unfortunately isn't possible over fields that aren't algebraically closed. I still have the feeling that there is a simple and elegant counterexample I just don't see. None of the literature on the Ax-Grothendieck theorem I read featured one.

Is there an injective, but not surjective polynomial $\mathbb{R}^2\rightarrow\mathbb{R}^2$?

All functions $f:A \rightarrow Y$ can be extended to $F$ if $i$ is injective

Posted: 25 Apr 2022 02:47 AM PDT

Given sets $A, X, Y$ and a function $i: A \rightarrow X$, we say that $f:A \rightarrow Y$ extends to $F:X \rightarrow Y$ if for all $a \in A$ $$F(i(a)) = f(a)$$

We want to show that all functions $f:A \rightarrow Y$ can be extended to $F$ if $i$ is injective.

$i$ being injective has the consequence that all $a \in A$ are mapped to distinct $x \in X$, so the image of $i$ is a subset of $X$ in bijection with $A$. I believe I have to break this up into cases, namely $f$ being injective or surjective. (I don't think I have to consider the bijective case separately, since bijective is injective + surjective.)

Case $1$: when $f$ is injective.
$\Rightarrow$ If $f$ is injective then it maps all $a \in A$ to distinct $y \in Y$ hence the $F$ exists.

Case $2$: When $f$ is surjective.
$\Rightarrow$ When $f$ is surjective, it has the property that for every $y \in Y$ there is an $a \in A$ such that $f(a) = y$. Suppose there exist $a_1 \neq a_2 \in A$ such that $f(a_1) = f(a_2) = y_0$ (otherwise it's the same thing as the earlier case). Since $a_1, a_2$ are mapped by $i$ to some $x_1, x_2 \in X$, $F$ can map them both to $y_0$, hence such an $F$ exists.

I believe this isn't a good approach/argument (and also I think that my surjective case went wrong). Any other methods to prove this or corrections in my approach would be very helpful!
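The underlying construction can be tried out on a small finite example. This is only a sketch with hypothetical sets; the default value $y_0$ is my own addition, needed for points of $X$ outside the image of $i$ (so $Y$ must be nonempty):

```python
# Sketch: if i is injective, any f: A -> Y extends to F: X -> Y.
# y0 is an arbitrary default value for points of X outside the image of i.

def extend(A, X, i, f, y0):
    """Build F: X -> Y with F(i(a)) = f(a) for all a in A."""
    inverse = {i[a]: a for a in A}  # well-defined precisely because i is injective
    return {x: f[inverse[x]] if x in inverse else y0 for x in X}

A = {1, 2, 3}
X = {'p', 'q', 'r', 's'}
i = {1: 'p', 2: 'q', 3: 'r'}   # injective
f = {1: 10, 2: 20, 3: 30}
F = extend(A, X, i, f, y0=0)

assert all(F[i[a]] == f[a] for a in A)
print(F)
```

Note that no case split on $f$ is needed: injectivity of $i$ alone makes $i^{-1}$ well-defined on the image.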

Double and triple integral of a sphere in polar coordinates

Posted: 25 Apr 2022 02:45 AM PDT

Consider the double integral $$I=\iint_D \sqrt{4-x^2-y^2}\, dA,$$ where $D=\{(x,y): x^2 + y^2 \le 4\}$.

A.) Use polar coordinates to evaluate the double integral I

B.) Give an interpretation of the integral I as the volume of a region R in (x, y, z)-space. Hence write I as a triple integral over R in cylindrical polar coordinates.

For A, I changed the integral to $\int_0^{2\pi}\int_0^2 r\sqrt{4-r^2}\, dr\, d\theta$. This eventually led me to get $\int_0^{2\pi} \frac{8}{3}\, d\theta$, which becomes $\frac{16}{3}\pi$. I'm not completely sure about the integral limits I used.

For B, I rewrote the equation as $x^2 + y^2 + z^2 = a^2 = 4$ (so $a=2$), and then solved normally using spherical polar coordinates. I got an answer of $\frac{32}{3}\pi$. But since the original integrand only covers the upper half of the sphere ($z = \sqrt{4-x^2-y^2} \ge 0$), should the volume be $\frac{16}{3}\pi$ instead?
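A quick numeric sanity check (a sketch, using a simple midpoint rule rather than any exact method) that the polar form of the integral in part A evaluates to $\frac{16}{3}\pi$:

```python
import math

# Midpoint-rule check that ∫_0^{2π} ∫_0^2 r·sqrt(4 - r²) dr dθ = 16π/3.
n = 2000
dr = 2.0 / n
inner = 0.0
for k in range(n):
    r = (k + 0.5) * dr          # midpoint of the k-th subinterval
    inner += r * math.sqrt(4 - r * r) * dr
I = 2 * math.pi * inner          # the θ-integral only contributes a factor 2π

print(I, 16 * math.pi / 3)
```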

Laws of a tensor product of algebras

Posted: 25 Apr 2022 02:45 AM PDT

Given a commutative field $k$ and two $k$-algebras $A$ and $B$, the tensor product $A\otimes_k B$ is also an algebra. What are its laws? I mean, the composition law is given by $(a\otimes b) \circ (c\otimes d) = (ac)\otimes (bd)$, so how do we define addition and multiplication?

Thank you

Inverse of Grassmann variables

Posted: 25 Apr 2022 02:52 AM PDT

Let $\theta$ be a Grassmann variable. I know that $1/\theta$ is not defined but $\frac{1}{1-\theta}=1+\theta$. My question is simple: is $\frac{1}{\bar{\theta}\theta}$ defined and if so how? My guess is that since $\bar{\theta}\theta$ is a commuting number then the inverse should exist but I am not sure.

Proving $F(x,y)=\left(x,y,\int_x^yf(z^2)dz\right)$ is differentiable and computing $\nabla F(x,y)$

Posted: 25 Apr 2022 02:44 AM PDT

Let $f:\Bbb R\to\Bbb R$ be a function of the class $C^1$ and let $F:\Bbb R^2\to\Bbb R^3$ be given by $$F(x,y)=\left(x,y,\int_x^yf(z^2)dz\right).$$

Prove $F$ is differentiable and compute $\nabla F(x,y).$

My thoughts:

First, $F$ is differentiable if and only if all of its components are, so it remains to show $F_3(x,y)=\int_x^yf(z^2)dz$ is differentiable.

Also, $g=f(z^2)$ is of class $C^1$ as a composition of a polynomial and a function $f\in C^1(\Bbb R)$.

I think I should use or possibly generalize the following results:

$\underline{\boldsymbol{\text{theorem } 11.1:}}$

Let $f:A=[a,b]\times[c,d]\to\Bbb R$ be continuous s. t. $\frac{\partial f}{\partial y}$ exists and is continuous on $A.$ Suppose $F:[c,d]\to\Bbb R$ is given by $$F(y)=\int_a^bf(x,y)dx.$$ Then $F$ is differentiable on $[c,d]$ and $F'(y)=\int_a^b\frac{\partial f}{\partial y}(x,y)dx.$

And its corollary:

$\underline{\boldsymbol{\text{corollary }11.2:}}$

Let $f:A=[a,b]\times[c,d]\to\Bbb R$ is continuous s.t. $\frac{\partial f}{\partial y}$ exists and is continuous on $A.$ Let $u,v:[c,d]\to[a,b]$ be of class $C^1$ and suppose $F:[c,d]\to\Bbb R$ is given by $$F(y)=\int_{u(y)}^{v(y)}f(x,y)dx.$$ Then, $F$ is differentiable on $[c,d]$ and $$F'(y)=f(v(y),y)v'(y)-f(u(y),y)u'(y)+\int_{u(y)}^{v(y)}\frac{\partial f}{\partial y}(x,y)dx.$$

Both results have been proven and I got some additional insight into the corollary here.

We could write: $$F_3(x,y)=\int_{\pi_1(x,y)}^{\pi_2(x,y)}(f\circ(\pi_3^2))(x,y,z)dz$$ where $\pi_1,\pi_2:\Bbb R^2\to\Bbb R$ are the projections on the first and second coordinate, respectively, and $\pi_3:\Bbb R^3\to\Bbb R$ is the projection on the third coordinate. However, I couldn't generalize the formula above, as $\frac{\partial (f\circ(\pi_3^2))}{\partial(x,y)}(x,y,z)$ is a vector, so I would need its components for the Jacobian.

I thought of fixing one of the variables $x$ or $y$ and see what I get in order to use the following:

$\underline{\boldsymbol{\text{theorem }12.22.:}}$

Suppose $A\subseteq\Bbb R^n$ is open and $f:A\to\Bbb R^m.$ If all partial derivatives $\frac{\partial f_i}{\partial x_j}$ exist and are continuous on $A,$ then $f$ is differentiable on $A.$

Or an even stronger result proven here, in case I managed to find partial derivatives, but there is only one differential, namely $dz$ under the integral, so I wasn't able to proceed. Hence my question:

How should I proceed, and can the fact that the function under the integral doesn't depend on $x$ and $y$, but only on $z$, be used?
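Since the integrand depends only on $z$, the fundamental theorem of calculus suggests $\partial F_3/\partial x = -f(x^2)$ and $\partial F_3/\partial y = f(y^2)$. A numeric check of that expectation, with the arbitrary sample choice $f(t)=\sin t$ (an assumption for illustration, not part of the question):

```python
import math

def F3(x, y, n=4000):
    # composite midpoint rule for ∫_x^y sin(z²) dz  (sample choice f = sin)
    h = (y - x) / n
    return sum(math.sin((x + (k + 0.5) * h) ** 2) * h for k in range(n))

x, y, h = 0.7, 1.3, 1e-5
dFdx = (F3(x + h, y) - F3(x - h, y)) / (2 * h)   # central finite difference
dFdy = (F3(x, y + h) - F3(x, y - h)) / (2 * h)

print(dFdx, -math.sin(x * x))   # expect ≈ -sin(x²)
print(dFdy, math.sin(y * y))    # expect ≈ sin(y²)
```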

How do you calculate the separate and combined probabilities of finding gold coins in boats and planes?

Posted: 25 Apr 2022 02:41 AM PDT

Let's say you're searching for gold coins across the wreckage of boats and planes. You want to calculate the probability of finding gold coins within the boats alone, the planes alone, and then from the boats and planes combined.

You search:

  • 7 contemporary boats, each with a 6.98% chance of containing one gold coin; and
  • 13 historic boats, each with a 17.17% chance.

You also search:

  • 1 contemporary plane, with a 50% chance of containing one gold coin; and
  • 2 historic planes, each with a 63% chance.

I have worked through a possible solution, but the odds of finding gold coins within the boats and planes combined seem higher than I intuited. I welcome feedback:

From boats alone:

Probability of finding at least one gold coin is 94.8%:

$$P(none) = (1-.0698)^7 \times (1-.1717)^{13} = .0521$$

$$P(at\ least\ 1) = 1 - P(none) = .9479$$

Probability of finding at least two gold coins is 78.0%:

$$P(none\ from\ contemporary) = (1 - .0698)^7 = .6026$$

$$P(none\ from\ historic) = (1 - .1717)^{13} = .0864$$

$$P(1\ from\ historic) = 13 \times .1717 \times (1-.1717)^{12} = .2328$$

$$P(1\ from\ contemporary) = 7 \times .0698 \times (1-.0698)^6 = .3165$$

$$P(1\ from\ both) = P(1\ from\ contemporary) \times P(none\ from\ historic) + P(none\ from\ contemporary) \times P(1\ from\ historic)$$

$$P(1\ from\ both) = .3165 \times .0864 + .6026 \times .2328 = .1676$$

$$P(at\ least\ 2) = 1 - P(none) - P(1) = 1 - .0521 - .1676 = .7803$$

From planes alone:

Probability of finding at least one gold coin is 93.2%:

$$P(none) = (1-.50) \times (1-.63)^2 = .0685$$

$$P(at\ least\ 1) = 1 - P(none) = .9315$$

Probability of finding at least two gold coins is 63.0%:

$$P(1) = P(1\ from\ contemporary) \times P(none\ from\ historic) + P(none\ from\ contemporary) \times P(1\ from\ historic) = .3016$$

$$P(at\ least\ 2) = 1 - P(none) - P(1) = 1 - .0685 - .3016 = .6299$$

(Solve using the same logic as boats.)

From both boats and planes:

Probability of finding at least one gold coin is 99.6%:

$$P(at\ least\ 1) = 1 - P(none\ from\ boats) \times P(none\ from\ planes) = 1 - .0521 \times .0685 = .9964$$

Probability of finding at least two gold coins is 96.9%:

$$P(1) = P(1\ from\ boats) \times P(none\ from\ planes) + P(none\ from\ boats) \times P(1\ from\ planes) = .0272$$

$$P(at\ least\ 2) = 1 - P(none) - P(1) = 1 - .0036 - .0272 = .9692$$

(Solve using the same logic as boats.)
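The hand computation can be cross-checked exactly by convolving the per-wreck Bernoulli distributions (a sketch; each wreck contributes the generating polynomial $(1-p) + p\,x$):

```python
# Exact distribution of the number of coins found, via convolution.

def coin_dist(sites):
    """sites: list of (count, p); returns [P(total coins = k)]."""
    dist = [1.0]
    for count, p in sites:
        for _ in range(count):
            new = [0.0] * (len(dist) + 1)
            for k, q in enumerate(dist):
                new[k] += q * (1 - p)      # no coin at this wreck
                new[k + 1] += q * p        # one coin at this wreck
            dist = new
    return dist

boats = coin_dist([(7, 0.0698), (13, 0.1717)])
planes = coin_dist([(1, 0.50), (2, 0.63)])
both = coin_dist([(7, 0.0698), (13, 0.1717), (1, 0.50), (2, 0.63)])

for name, d in [("boats", boats), ("planes", planes), ("both", both)]:
    print(name, "P(>=1) =", 1 - d[0], "P(>=2) =", 1 - d[0] - d[1])
```

The printed values agree with the figures above (94.8%, 78.0%, 93.2%, 63.0%, 99.6%, 96.9%), so the "higher than intuited" combined numbers do check out.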

Stone–Čech compactification whose input space is more general than being completely regular, and whose output space just locally compact Hausdorff?

Posted: 25 Apr 2022 02:47 AM PDT

The Rudin version of the Riesz-Markov representation theorem assumes that $X$ is locally compact Hausdorff. On the other hand, the input of the Stone–Čech compactification theorem is a completely regular space $X$, while its output is a compact Hausdorff space $\beta X$, i.e.,

Let $X$ be a completely regular space. Then there exists a compact Hausdorff space $\beta X$ and a map $T:X \to \beta X$ such that

  • $T$ is a homeomorphism from $X$ to $T(X)$.

  • $T(X)$ is dense in $\beta X$.

  • If $f: X \to Y$ is continuous with $Y$ being a compact Hausdorff space, then there is a unique continuous map $g: \beta X \to Y$ such that $f= g \circ T$.

Clearly, if a space is locally compact Hausdorff, then it is completely regular. But the Riesz-Markov theorem already applies to locally compact Hausdorff spaces, so there we don't need the Stone–Čech compactification theorem.

Is there any version of Stone–Čech compactification theorem such that

  • the input space is more general than a completely regular space, and
  • the output space is locally compact Hausdorff?

If such version of Stone–Čech compactification theorem exists, then we can combine it with Rudin's version of Riesz-Markov representation theorem.

Thank you so much for your help!

How to calculate the joint MGF?

Posted: 25 Apr 2022 02:38 AM PDT

The number of patients entering a ward follows a Poisson distribution with mean 5. The probability of each admitted patient testing positive for an infection is 0.1. Compute the joint moment generating function of the total number of patients and the number of patients who tested positive.

Any hint would be very much appreciated.

Showing that triangles have same area

Posted: 25 Apr 2022 02:37 AM PDT

Let $abcd$ and $aefg$ be two squares in $ \mathbb{R}^2 $ with the common vertex $a$, without any overlapping otherwise.

How can I show that the triangles $ade$ and $agb$ have the same area without knowing any angles? I have no idea how to even start. Thank you for any help!
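A numeric sanity check of the claim (an exploration with hypothetical coordinates, not a proof): writing the squares with complex side vectors $p$ and $q$, so that $abcd = a, p, (1+i)p, ip$ and $aefg = a, q, (1+i)q, iq$ with $a = 0$, the shoelace formula gives equal areas for triangles $ade$ and $agb$:

```python
def tri_area(u, v, w):
    # shoelace formula via the complex cross product Im(conj(v-u)·(w-u))
    return abs(((v - u).conjugate() * (w - u)).imag) / 2

a = 0
p, q = 3 + 1j, -1 + 2j       # sample side vectors pointing into different half-planes
b, d = p, 1j * p             # square abcd = a, b, c, d (counterclockwise)
e, g = q, 1j * q             # square aefg = a, e, f, g

area_ade = tri_area(a, d, e)
area_agb = tri_area(a, g, b)
print(area_ade, area_agb)
```

The algebraic reason is visible in the code: both areas reduce to $\tfrac12|\operatorname{Im}(\overline{ip}\,q)| = \tfrac12|\operatorname{Im}(\overline{iq}\,p)|$.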

Show that $\ker(\phi\otimes\phi)\subseteq \ker\phi\otimes A+A\otimes\ker\phi$

Posted: 25 Apr 2022 02:34 AM PDT

Context: Let $\phi:A\to B$ be a morphism of Hopf algebras. To show that $\ker\phi$ is a Hopf ideal of $A$, we must verify that $\Delta(\ker\phi)\subseteq \ker\phi\otimes A+A\otimes\ker\phi$, where $\Delta:A\to A\otimes A$ is the co-multiplication.

I have showed that $\Delta(\ker\phi)\subseteq \ker(\phi\otimes\phi)$ using the compatibility conditions for morphism of Hopf algebras.

It turns out that showing $\ker(\phi\otimes\phi)\subseteq \ker\phi\otimes A+A\otimes\ker\phi$ is an easy exercise in linear algebra, but I can't really find a satisfactory proof.

My attempt: Let $(v_i)_{i\in I}$ be a basis of $\ker(\phi)$ and extend it to a basis $(v_i)_{i\in J}$ of $A$. Then any element $x$ in $A\otimes A$ is of the form $$ x=\sum_{i,j\in J}x_{ij}v_i\otimes v_j.$$

Then $x\in \ker(\phi\otimes\phi)\iff \sum_{i,j\in J}x_{ij}\phi(v_i)\otimes \phi(v_j)=0$. Now, I know for sure that $(\phi(v_i))_{i\in J}$ is a linear independent set in $B$.

QUESTION 1: Is the same true for $(\phi(v_i)\otimes \phi(v_j))_{i,j\in J}$ in $B\otimes B$? If so, this would imply $x_{ij}=0,\forall i,j$.

I believe the final reasoning is then as follows: by linearity of $\phi$ and properties of the tensor product, we can put some $x_{ij}$ with $v_i$, creating terms of the form $\phi(x_{ij}v_i)\otimes \phi(v_j)\in\ker\phi\otimes A$, but we could also put them with $v_j$, creating terms of the form $\phi(v_i)\otimes \phi(x_{ij}v_j)\in A\otimes\ker\phi$.

QUESTION 2: This kind of seems like a little bit of cheating, but is it ok?

Who wins in a game of card draw

Posted: 25 Apr 2022 02:28 AM PDT

Consider Ace to be \$1, 2 is \$2, all the way up to King being \$13.

I (Player A) shuffle a standard 52 card deck and draw 3 cards, then my friend (Player B) draws 3 cards.

The person with the higher total dollar value wins, and the loser pays the value of their three cards to the winner.

What is the expected winnings for player A and B?

I don't know how to approach this question, it feels like an overwhelming number of possibilities on a case by case basis. Is there a quicker way to calculate?
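One shortcut to notice before casework: the two players' hands are exchangeable, so the expected net winnings should be exactly $0$ for each. A Monte Carlo sketch (assuming, as a guess, that no money changes hands on a tie):

```python
import random

# Monte Carlo sketch of the card game; A=1, ..., K=13, four of each.
random.seed(1)
values = [v for v in range(1, 14) for _ in range(4)]  # 52-card deck

def play():
    deck = values[:]
    random.shuffle(deck)
    a, b = sum(deck[:3]), sum(deck[3:6])  # A draws first three, B the next three
    if a > b:
        return b       # B pays their total to A
    if b > a:
        return -a      # A pays their total to B
    return 0           # tie: assume no payment

trials = 200_000
net_a = sum(play() for _ in range(trials)) / trials
print(net_a)  # should hover near 0 by symmetry
```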

iid random samples from a pdf

Posted: 25 Apr 2022 02:23 AM PDT

Suppose $X_1,\dots,X_n$ are iid random samples with pdf $f_X(x\mid\theta)=\theta e^{-\theta x}$, where $x\geq 0$, $\theta>0$. Show that $n/\sum_{i=1}^n X_i$ is a consistent estimator of $\theta$.
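A simulation sketch of what consistency looks like here (the choice $\theta = 2$ is arbitrary): the estimator $n/\sum_i X_i$ should settle down onto $\theta$ as $n$ grows.

```python
import random

# X_i ~ Exponential(theta) with pdf theta * e^(-theta x); estimator n / sum(X_i).
random.seed(0)
theta = 2.0
for n in (100, 10_000, 1_000_000):
    est = n / sum(random.expovariate(theta) for _ in range(n))
    print(n, est)
```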

Joint MGF for Poisson

Posted: 25 Apr 2022 02:42 AM PDT

Suppose $X_i$ ~ Poisson$(λ_i ), i=1,2,3$. The $X_i$'s are independent.

Let $Y_1 = X_1 + X_2$ and $Y_2 = X_2 + X_3$. Find the joint MGF and the covariance using the properties of expectation.

Here I know the individual MGF's: $$m_{Y_1}(t_1)=e^{(\lambda_1+\lambda_2)(e^{t_1}-1)}$$ $$m_{Y_2}(t_2)=e^{(\lambda_2+\lambda_3)(e^{t_2}-1)}$$

I ended up multiplying these to get the joint MGF but then realized that was probably wrong since the $Y$'s both depend on $X_2$? How should I do this?
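Your suspicion is worth testing: since $\operatorname{Cov}(X_1+X_2, X_2+X_3) = \operatorname{Var}(X_2) = \lambda_2$ by independence, the covariance is nonzero and the product of marginal MGFs cannot be the joint MGF. A Monte Carlo sketch of that covariance (sample $\lambda$'s are my own choice):

```python
import math, random

random.seed(7)

def poisson(lam):
    # Knuth's multiplication method; fine for small lambda
    L, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= random.random()
        if p < L:
            return k
        k += 1

l1, l2, l3 = 1.0, 2.0, 3.0
n = 200_000
s1 = s2 = s12 = 0.0
for _ in range(n):
    x1, x2, x3 = poisson(l1), poisson(l2), poisson(l3)
    y1, y2 = x1 + x2, x2 + x3
    s1 += y1; s2 += y2; s12 += y1 * y2
cov = s12 / n - (s1 / n) * (s2 / n)
print(cov)  # should be close to lambda_2 = 2
```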

How to bound the phase error of the complex number

Posted: 25 Apr 2022 02:30 AM PDT

Given complex numbers $e^{i\theta_k}$ for $k=1,2,3,...,L$ and $r' e^{i\theta'}$, together with weights $p_k$ satisfying $\sum_{k=1}^{L}p_k=1$ ($1\geq p_k\geq 0$ for each $k$), where $r',\theta_k,\theta'\in \mathbb{R}$. Each $\theta_k$ is very close to $\theta'$, and $r'$ is very close to $1$.
If we have $$ \left|\sum_{k=1}^{L} p_k e^{i\theta_k}-r'e^{i\theta'}\right|\leq O(\epsilon), $$ where $\epsilon\ll 1$, can we deduce that $$ \left|\sum_{k=1}^{L} p_k \theta_k-\theta'\right|\leq O(\epsilon)? $$

Understanding Riesz-Markov representation theorem

Posted: 25 Apr 2022 02:36 AM PDT

Recently, I have come across Riesz representation theorem in this lecture note.


(Gaans) Let $X$ be a compact Hausdorff space and $E:=\mathcal C_b (X)$ the real vector space of all real-valued continuous bounded maps on $X$. Let $E'$ be the dual of $E$. Let $\varphi \in E'$ be positive and $\|\varphi\|_E = 1$. Then there exists a unique Borel probability measure $\mu$ on $X$ such that $$ \varphi(f) = \int_X f \mathrm d \mu \quad \forall f \in E. $$


(Rudin) Let

  • $X$ be a locally compact Hausdorff space,

  • $E:= \mathcal C_c (X)$ the complex vector space of all complex-valued continuous maps on $X$ with compact support, and

  • $\Lambda:E \to \mathbb C$ (not necessarily continuous) positive linear. Here $\Lambda$ is positive means $f \in E \text{ s.t. } f(X) \subset \mathbb R_{\ge 0}$ implies $\Lambda (f) \in \mathbb R_{\ge 0}$.

Then there exists a $\sigma$-algebra $\mathfrak{M}$ on $X$ which contains all Borel sets of $X$, and there exists a unique non-negative measure $\mu$ on $\mathfrak{M}$ such that

  1. $$\Lambda(f) = \int_X f \mathrm d \mu \quad \forall f \in E.$$

  2. $\mu(K) < \infty$ for every compact set $K \subseteq X$.

  3. $\mu(E) = \inf \{\mu(V) \mid E \subset V, V \text{ open}\}$ for all $E \in \mathfrak{M}$.

  4. $\mu(E) = \sup \{\mu(K) \mid K \subset E, K \text{ compact}\}$ for all open set $E \subseteq X$ and for all $E \in \mathfrak{M}$ with $\mu(E) <\infty$.

  5. If $E \in \mathfrak{M}$ such that $A \subset E$, and $\mu(E)=0$, then $A \in \mathfrak{M}$.


My understanding:

    2. means $\mu$ is finite on compact sets.
    3. means $\mu$ is outer regular on every $E \in \mathfrak{M}$.
    4. means $\mu$ is tight on every open set and on every $E \in \mathfrak{M}$ with $\mu(E) <\infty$.
    5. means $\mu$ is complete.

Rudin's version is powerful in the sense that it applies even to unbounded linear maps.

We can recover Gaans's version as follows. The real vector space of all real-valued continuous bounded maps on $X$ can be seen as a real subspace of the complex vector space of all complex-valued continuous bounded maps on $X$. By the Hahn-Banach theorem, we can extend $\varphi$ to that whole complex vector space. Then we apply Rudin's version to get $\mu$, and restrict $\mu$ to the Borel $\sigma$-algebra of $X$.

Could you confirm if my above understanding is correct?

Probability of rain on weekend

Posted: 25 Apr 2022 02:24 AM PDT

The probability of rain on Saturday is $30\%$. The probability of rain on Sunday is $40\%$.

a) What is the probability of raining on weekend?

I got $58$%, is this right?

I did $P(\text{rain+rain})$ + $P(\text{rain}+\text{no rain})$ combinations etc

and added all the combinations with a rain on one of the days to get $0.58$

b) What is the assumption made? (That rain on the two days are independent events.)

c) If the events are not independent, what are the maximum and minimum probabilities of rain on the weekend?

I don't know how to approach c), could you please help?

Kind regards

Alex
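For c), one way to explore the range is to parametrise the joint distribution by $t = P(\text{rain Sat} \cap \text{rain Sun})$, so $P(\text{rain on the weekend}) = 0.3 + 0.4 - t$, with $t$ constrained to $[\max(0, 0.3+0.4-1), \min(0.3, 0.4)]$. A small sweep over admissible values (a sketch, not a formal proof of the bounds):

```python
# Sweep t = P(rain both days) over its admissible range and record
# P(rain on the weekend) = P(Sat) + P(Sun) - t.
pa, pb = 0.3, 0.4
lo_t = max(0.0, pa + pb - 1)   # joint probability cannot be negative
hi_t = min(pa, pb)             # joint probability cannot exceed either marginal
weekend = [pa + pb - (lo_t + k * (hi_t - lo_t) / 100) for k in range(101)]
print(min(weekend), max(weekend))  # ≈ 0.4 and ≈ 0.7
```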

Find theta given point on circle

Posted: 25 Apr 2022 02:23 AM PDT

I am having trouble visualizing and understanding how you might obtain an angle given a point on a circle. I have an $(x, y)$ point where both $x$ and $y$ range over $[0,1]$. How would I calculate the angle $\theta$?

My confusion comes from a piece of code which is taking a random point and calculating theta and then using this theta to produce a rotation matrix to rotate a given direction.

I have a disk which is divided into $N$ directions. In this instance we have divided into $8$. enter image description here

A single direction angle can be obtained by looping through the number of directions and computing $i \cdot disk$, as shown in the code below. This will be the direction we would like to rotate. Below is the implementation in GLSL:

    // Rotate direction
    vec2 RotateDirectionAngle(vec2 direction, vec2 noise)
    {
        float theta = noise.y * (2.0 * PI);
        float costheta = cos(theta);
        float sintheta = sin(theta);
        mat2 rotationMatrix = mat2(vec2(costheta, -sintheta), vec2(sintheta, costheta));
        return rotationMatrix * direction;
    }

    int directions = 8;
    float disk = 2.0 * PI / directions;

    for(int i = 0; i < directions; i++)
    {
        float samplingDirectionAngle = i * disk;
        vec2 rotatedDirection = RotateDirectionAngle(vec2(cos(samplingDirectionAngle), sin(samplingDirectionAngle)), noise.xy);
    }

Sorry if this question is super basic but I'm finding it hard to visualize the calculations. Would appreciate any insight to help me better understand
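For the inverse direction (point on the circle to angle), the standard tool is the two-argument arctangent, which handles all four quadrants where plain $\arctan(y/x)$ does not. A sketch in Python (GLSL's `atan(y, x)` behaves the same way):

```python
import math

points = [(1.0, 0.0), (0.0, 1.0), (-1.0, 0.0), (0.5, 0.5)]
for x, y in points:
    theta = math.atan2(y, x)           # angle in (-pi, pi]
    theta = theta % (2 * math.pi)      # remap to [0, 2*pi) if preferred
    print((x, y), theta)
```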

Problem $2.9.6$ (Perko's ODE): Show $|Y(t)| \le |Y(0)|\exp\left(\int_0^t \|A(s)\|\, ds\right)$ for $Y' = AY$

Posted: 25 Apr 2022 02:29 AM PDT

Let $A(t)$ be a continuous real-valued square matrix of size $n\times n$. Show that every solution of the nonautonomous linear system (where $Y(t) \in \mathbb R^n$ for all $t$ in the domain of $Y$) $$Y'(t) = A(t)Y(t)$$ satisfies $$|Y(t)| \le |Y(0)|\exp\left(\int_0^t \|A(s)\|\, ds\right) \tag{1}$$ Further, show that if $\int_0^\infty \|A(s)\|\, ds < \infty$, then every solution of this system has a finite limit as $t$ approaches infinity.


My work: Suppose $$|Y(t)| \le |Y(0)|\exp\left(\int_0^t \|A(s)\|\, ds\right)$$ is true; and take $t\to\infty$. Since we don't know if $\lim_{t\to\infty} Y(t)$ exists (yet?), it makes sense to work with $\liminf_{t\to\infty}$ and $\limsup_{t\to\infty}$ instead. So, we have $$\left|\liminf_{t\to\infty} Y(t)\right| \le |Y(0)|\exp\left(\int_0^\infty \|A(s)\|\, ds\right) \quad\text{and}\quad \left|\limsup_{t\to\infty} Y(t)\right| \le |Y(0)|\exp\left(\int_0^\infty \|A(s)\|\, ds\right)$$ If we can show that $\lim_{t\to\infty} Y(t)$ exists, then we are done (by any of the two inequalities above). How do we do that? Further, we also need to prove $(1)$.

Thanks!

How do I solve this circle geometry problem?

Posted: 25 Apr 2022 02:49 AM PDT

How do I solve this circle geometry problem? It is question number 17 at this link: https://www.cusd80.com/cms/lib6/AZ01001175/Centricity/Domain/1520/Geo%20Circle%20Review.pdf


Find the minimum value of $\frac{p}{q-r}$.

Posted: 25 Apr 2022 02:38 AM PDT

Suppose the vertex of the parabola $$y=ax^2+bx+c \qquad (0<2a<-b)$$ is not below the $x$-axis.

Let there be three points on the parabola: $A(-1,p), B(0,q)$ and $C(1,r)$.

Then what is the minimum value of $\dfrac{p}{q-r}$ ?

My thoughts: Since the axis of symmetry of the function is $x=-\dfrac{b}{2a}>1$ and $\Delta=b^2-4ac<0$, we get $\dfrac{p}{q-r}=-\dfrac{a-b+c}{a+b}$. I thought of solving it via the graph of the function, but it seems difficult to find the answer with only these limited conditions.

Question about decomposition about splitting of symmetirc spaces of compact type

Posted: 25 Apr 2022 02:35 AM PDT

I get stuck in the following question:

Why does a locally symmetric space of compact type $M$ split locally into irreducible components of dimension $\geq 2$ which are Einstein? In particular, why are all eigenvalues of the Ricci curvature ${\rm Ricci}(M)$ strictly positive, and why does each eigenvalue have multiplicity at least 2?

Could you please give me some help with the details? Thanks in advance. By the way, are there some nice references for these questions to refer to?

Does the Coxeter group $C(D_n)$ have any "proper reflection quotients" except $C(A_{n-1})$?

Posted: 25 Apr 2022 02:35 AM PDT

Here, a reflection quotient is a surjective homomorphism between Coxeter groups mapping reflections to reflections. A reflection quotient $C(\Delta) \to C(\Gamma)$ is proper if it is not injective and $\Gamma$ is not the single vertex graph.

An example of a proper reflection quotient is the homomorphism $C(D_n) \to C(A_{n-1})$ identifying two leaves of $D_n$. Are there any other reflection quotients $C(D_n) \to C(\Gamma)$? If yes, which ones?

Note that $C(A_{k-1}) \cong S_k$ does not have any proper reflection quotients for any $k$. This can be seen by noting that if $k \neq 4$ the only proper normal subgroup of $S_k$ is simple of index two, and if $k = 4$ the only other quotient is in fact $S_3$, but not in a reflection-preserving manner. Because $C(A_{n-1})$ is not only a reflection quotient but also a quite big reflection subgroup (i.e. a subgroup generated by reflections) of $C(D_n)$, it seems likely that the answer to my question is no.

Literature about the topic of what I called reflection quotients would be greatly appreciated, as I did not find any.

Number of ring endomorphisms of $\mathbb{Z}[\zeta_n]$

Posted: 25 Apr 2022 02:24 AM PDT

Let $n \in \mathbb{N}$ and $\zeta_n = e^{2i\pi/n}$. We define the following subring of $\mathbb{C}$ by

$$\mathbb{Z}[\zeta_n] = \{ P(\zeta_n) : P \in \mathbb{Z}[x]\}$$

One can easily show that,

$$\mathbb{Z}[\zeta_n] = \{a_0+a_1 \zeta_n+\dots+a_{n-1} \zeta_n^{n-1} : a_i \in \mathbb{Z} \}$$

My question is about the number $N_n$ of ring endomorphisms of $\mathbb{Z}[\zeta_n]$.


My work so far :

If $f : \mathbb{Z}[\zeta_n] \rightarrow \mathbb{Z}[\zeta_n]$ is a ring morphism then as $\mathbb{Z}[\zeta_n]$ is a subring of $\mathbb{C}$ we have, $f(m) = m$ for all $m \in \mathbb{Z}$. From that we easily deduce that for any element of $\mathbb{Z}[\zeta_n]$ we must have,

$$f\left(\sum_{i=0}^{n-1} a_i(\zeta_n)^i \right)=\sum_{i=0}^{n-1} a_if(\zeta_n)^i$$

And more generally,

$$\forall P \in \mathbb{Z}[x],f(P(\zeta_n))= P(f(\zeta_n)) $$

Moreover, as we know that $f(1) = 1$ and $\zeta_n^n = 1$ then $f(\zeta_n)^n=f(\zeta_n^n) = f(1) = 1$. Hence, $f(\zeta_n)$ is a $n$th root of unity, so there exists $0\leq k \leq n-1$ such that $f(\zeta_n) = \zeta_n^k$.

Therefore $N_n \leq n$. If we only counted group morphisms, then the answer would be $n$, but being a ring morphism is far more restrictive, and simple examples show the bound is not attained.

Indeed, $\mathbb{Z}[\zeta_1] = \mathbb{Z}[\zeta_2] = \mathbb{Z}$, thus $N_1 =N_2 = 1$. Likewise, $\mathbb{Z}[\zeta_4] = \mathbb{Z}[i]$ where there are only two ring morphisms, so $N_4 = 2$.

From this observation, I deduced the following conjecture $$\forall n \in \mathbb{N}, N_n = \varphi(n) $$

where $\varphi$ denotes the Euler's totient function.

My hope would be to show that $f(\zeta_n) = \zeta_n^k$ has to be a generator of the cyclic group of $n$th roots of unity, that is to say, $k$ and $n$ are coprime.

Equivalently, I want to prove that the map,

$$f : P(\zeta_n) \in \mathbb{Z}[\zeta_n] \mapsto P(\zeta_n^k) \in \mathbb{Z}[\zeta_n]$$

is only well-defined when $k \wedge n = 1$ (i.e. $f(P(\zeta_n))$ does not depend on the choice of $P$). From this point on, it will be easy to prove that $f$ is indeed a ring morphism.

If $P = a_0+a_1x+\dots+a_p x^p \in \mathbb{Z}[x]$ and $Q = b_0+b_1x+\dots+b_q x^q \in \mathbb{Z}[x]$ are such that $P(\zeta) = Q(\zeta)$ where $\zeta$ is any generator of the $n$th roots of unity, then we have,

$$\sum_{j=0}^{n-1} \left( \sum_{i \equiv j [n]} a_i \right)\zeta^j = \sum_{j=0}^{n-1} \left(\sum_{i \equiv j [n]} b_i \right) \zeta^j$$

But somehow I have difficulties going further. Does this equality for one generator imply that all the corresponding coefficients must be equal? Is $(1,\zeta,\dots,\zeta^{n-1})$ some sort of basis? What happens when $k$ and $n$ are not coprime? Are there any simple counterexamples?


I may have missed simpler arguments. What are your thoughts on this problem and on my reasoning? Any hint or suggestion is welcome.


Edit - current status : Thanks to Rob Arthan's answer, we deduce that $k$ being coprime with $n$ is a necessary condition for $f$ to be a ring homomorphism which leads me to,

$$N_n \leq \varphi(n) $$

I want to show that this number is reached by defining $\varphi(n)$ different ring homomorphisms.

Some observations:

  • As $\zeta_n^k$ is a primitive $n$th root of unity then $\Bbb{Z}[\zeta_n] = \Bbb{Z}[\zeta_n^k]$.

  • I realised that if,

$$\sum_{j=0}^{n-1} a_j \zeta_n^j = 0 $$

$\qquad$ then the $a_j$ are not necessarily all zero, but this was to be expected.

As Z Wu pointed out, cyclotomic polynomials come in handy for this problem involving primitive roots of unity. I really like this approach, but I am not supposed to know anything about cyclotomic polynomials in my algebra course, so I wonder whether there is a more elementary approach.
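One concrete dependence behind the observation above can be checked numerically: for every $n \ge 2$ the powers satisfy $1 + \zeta_n + \dots + \zeta_n^{n-1} = 0$, so $(1,\zeta_n,\dots,\zeta_n^{n-1})$ is never linearly independent over $\Bbb Z$ for $n \ge 2$ (a quick floating-point sketch, not a proof):

```python
import cmath

# For each sample n, the sum of all n-th roots of unity should vanish,
# exhibiting a nontrivial integer relation among 1, zeta, ..., zeta^(n-1).
for n in (3, 4, 6, 12):
    z = cmath.exp(2j * cmath.pi / n)
    total = sum(z ** k for k in range(n))
    print(n, abs(total))  # ≈ 0 up to floating-point error
```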

How do we solve the recurrence $T(n) = 2T(n/3) + n^{ \log_32 }\log\log n$?

Posted: 25 Apr 2022 02:52 AM PDT

How do we solve the recurrence $T(n) = 2T(n/3) + n^{ \log_32 }\log\log n$? Also, is it possible to solve this recurrence by the Master method?

What is the correct representation of the generalized gamma function?

Posted: 25 Apr 2022 02:44 AM PDT

The NIST Digital Library of Mathematical Functions defines the multivariate gamma function as $$ \Gamma_{m}\left(s_{1},\dots,s_{m}\right)=\int_{\boldsymbol{\Omega}}\mathrm{etr}\left(-\mathbf{X}\right)\left|\mathbf{X}\right|^{s_{m}-\frac{1}{2}(m+1)}\prod_{j=1}^{m-1}|(\mathbf{X})_{j}|^{s_{j}-s_{j+1}}\,\mathrm{d}{\mathbf{X}}, $$ see here for all the definitions of the variables. Another representation is given by $$ \Gamma_{m}\left(s_{1},\dots,s_{m}\right)=\pi^{m(m-1)/4}\prod_{j=1}^{m}\Gamma\left(s_{j}-\tfrac{1}{2}(j-1)\right), $$ see here.

The book Analysis on Symmetric Cones by Faraut and Korányi (1994) has in my eyes the same definition, but in their Theorem VII.1.1 the representation suddenly has a 2 in it, which I marked in red. [screenshot of Theorem VII.1.1] Note that to translate to the representation of the NIST Digital Library of Mathematical Functions we have $n=m(m+1)/2$, $r=m$, $d=1$. Also, see below for the definitions of all variables according to Faraut and Korányi.

Where does the 2 in Faraut and Korányi come from?

[screenshot of the variable definitions from Faraut and Korányi]

How can I prove that a multivariable real vector valued function is continuous iff its component functions are continuous?

Posted: 25 Apr 2022 02:37 AM PDT

Suppose we have a multivariable, real, vector-valued function $f:\mathbb{R}^n \to \mathbb{R}^m$ where $f = (f_1, f_2, ..., f_m)$. How can I prove that $f$ is continuous iff all of its component functions $f_i$ are continuous? Here $f_i$ is a multivariable, real function $f_i:\mathbb{R}^n \to \mathbb{R}$.

FORWARDS:

suppose that $f$ is continuous and $a = (a_1, a_2, ..., a_n) \in \mathbb{R}^n$

$\implies \forall a, \forall \epsilon > 0, \exists \delta > 0$ s.t. $d(a, x) < \delta \implies d(f(a), f(x)) < \epsilon$

$\implies ||f(x) - f(a)|| < \epsilon$

$\implies \sqrt{\sum_{i = 1}^{m}(f_i(x) - f_i(a))^2} < \epsilon$

$\implies \sum_{i = 1}^{m}(f_i(x) - f_i(a))^2 < \epsilon^2$

$\implies (f_i(x) - f_i(a))^2\ < \epsilon^2$

$\implies |f_i(x) - f_i(a)| < \epsilon$

BACKWARDS:

suppose that each $f_i$ is continuous and $a = (a_1, a_2, ..., a_n) \in \mathbb{R}^n$

$\implies \forall a, \forall \epsilon > 0, \exists \delta_i > 0$ s.t. $d(a, x) < \delta_i \implies d(f_i(a), f_i(x)) < \epsilon$

if we let $\delta = \min\{\delta_1, \delta_2, ..., \delta_m\}$ , we can say that...

$\implies \forall a, \forall \epsilon > 0, \exists \delta > 0$ s.t. $d(a, x) < \delta \implies d(f_i(a), f_i(x)) < \epsilon$

Now, because the above statement holds true $\forall \epsilon$, we can say that...

$\implies \forall a, \forall \epsilon > 0,\exists \delta > 0$ s.t. $d(a, x) < \delta \implies d(f_i(a), f_i(x)) < \frac{\epsilon}{\sqrt{m}}$

$\implies |f_i(x) - f_i(a)| < \frac{\epsilon}{\sqrt{m}}$

$\implies (f_i(x) - f_i(a))^2 < (\frac{\epsilon}{\sqrt{m}})^2$

$\implies \sum_{i = 1}^{m}(f_i(x) - f_i(a))^2\ < m(\frac{\epsilon}{\sqrt{m}})^2 = \epsilon^2$

$\implies \sqrt{\sum_{i = 1}^{m}(f_i(x) - f_i(a))^2} < \epsilon$

$\implies ||f(x) - f(a)|| < \epsilon$

Also, does this proof only hold true when $d$ is the standard Euclidean distance on $\mathbb{R}^n$ and $\mathbb{R}^m$? Why would other norms or metrics suffice? I've seen a few other posts online about this proof; however, I'm still unsure about a few things.

Integral formulas involving continued fractions

Posted: 25 Apr 2022 02:27 AM PDT

Ramanujan posed the following formulas as questions in the Journal of Indian Mathematical Society:

$$\int_{0}^{\infty}\dfrac{\sin nx\,\,dx}{{\displaystyle x + \dfrac{1}{x +}\dfrac{2}{x +}\dfrac{3}{x +}\dfrac{4}{x + \cdots}}} = \dfrac{\sqrt{\dfrac{\pi}{2}}}{n + \dfrac{1}{n +}\dfrac{2}{n +}\dfrac{3}{n +}\dfrac{4}{n + \cdots}}\tag{1}$$ $$\int_{0}^{\infty}\dfrac{\sin\left(\dfrac{\pi nx}{2}\right)\,\,dx}{{\displaystyle x + \dfrac{1^{2}}{x +}\dfrac{2^{2}}{x +}\dfrac{3^{2}}{x +}\dfrac{4^{2}}{x + \cdots}}} = \dfrac{1}{n +}\dfrac{1^{2}}{n +}\dfrac{2^{2}}{n +}\dfrac{3^{2}}{n + \cdots}\tag{2}$$

From this post we know that $$\phi(x) = e^{x^{2}/2}\int_{x}^{\infty}e^{-t^{2}/2}\,dt = \frac{1}{x +}\frac{1}{x +}\frac{2}{x +}\frac{3}{x + \cdots}\tag{3}$$ and hence the first integral formula reduces to $$B = \int_{0}^{\infty}\phi(x)\sin nx\,\,dx = \sqrt{\frac{\pi}{2}}\phi(n) = \phi(n)\int_{0}^{\infty}e^{-t^{2}/2}\,dt\tag{4}$$ Considering the integral $$A = \int_{0}^{\infty}\phi(x)\cos nx\,\,dx$$ we can see that \begin{align} A + iB &= \int_{0}^{\infty}e^{x^{2}/2 + inx}\int_{x}^{\infty}e^{-t^{2}/2}\,dt\,dx\notag\\ &= e^{n^{2}/2}\int_{0}^{\infty}e^{(x + in)^{2}/2}\int_{x}^{\infty}e^{-t^{2}/2}\,dt\,dx\notag \end{align}

I am not so much used to the theory of complex integration, but I think it should be possible to interchange the order of integration above and get the values of $A$ and $B$ without going too deep into the theory of complex integration. Still, it does not seem rigorous to me. Please let me know if I am on the right track (or whether this approach can be made rigorous), or suggest an alternative approach.

For integral $(2)$ I have no idea of the continued fraction used there. Any clues to the solution of $(2)$ would also be greatly helpful.
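Identity $(3)$, on which the reduction to $(4)$ rests, can at least be verified numerically: $\phi(x) = e^{x^2/2}\int_x^\infty e^{-t^2/2}\,dt = e^{x^2/2}\sqrt{\pi/2}\,\operatorname{erfc}(x/\sqrt2)$, compared against the continued fraction evaluated bottom-up at a finite depth (a sketch; depth and test point are arbitrary choices):

```python
import math

def phi_cf(x, depth=800):
    # evaluate 1/(x + 1/(x + 2/(x + 3/(x + ...)))) from the bottom up
    f = 0.0
    for k in range(depth, 0, -1):
        f = k / (x + f)
    return 1 / (x + f)

def phi_erfc(x):
    # phi(x) = e^{x^2/2} * sqrt(pi/2) * erfc(x / sqrt(2))
    return math.exp(x * x / 2) * math.sqrt(math.pi / 2) * math.erfc(x / math.sqrt(2))

x = 2.0
print(phi_cf(x), phi_erfc(x))
```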

Take seven courses out of 20 with requirement

Posted: 25 Apr 2022 02:45 AM PDT

To fulfill the requirements for a certain degree, a student can choose to take any 7 out of a list of 20 courses, with the constraint that at least 1 of 7 courses must be a statistics course. Suppose that 5 of the 20 courses are statistics courses.

From Introduction to Probability, Blitzstein, Hwang

Why is ${5 \choose 1}{19 \choose 6}$ not the correct answer?
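The issue can be seen by counting both ways: ${5 \choose 1}{19 \choose 6}$ designates one statistics course and then picks freely, so a schedule with exactly $k$ statistics courses gets counted $k$ times. A sketch comparing it with complementary counting:

```python
from math import comb

correct = comb(20, 7) - comb(15, 7)   # all schedules minus "no statistics"
overcount = comb(5, 1) * comb(19, 6)  # the proposed answer

# The proposed answer equals sum over k of k * (#schedules with exactly k
# statistics courses), confirming the multiple counting.
weighted = sum(k * comb(5, k) * comb(15, 7 - k) for k in range(1, 6))

print(correct, overcount, weighted)
```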

Integral of $|f|$ outside a compact set

Posted: 25 Apr 2022 02:37 AM PDT

Let $G$ be a locally compact group. Given $f\in L^1(G)$ and $\epsilon>0$, how to show that there is a compact set $K\subset G$ such that $\int_{G\setminus K}|f|<\epsilon$?
