Monday, May 31, 2021

Recent Questions - Mathematics Stack Exchange

Reference request: sheaf cohomology after Bott & Tu

Posted: 31 May 2021 08:09 PM PDT

Undergrad here. I want to understand sheaf cohomology. (Not sure if this is relevant, but I also want to understand étale cohomology.)

To that end, I've been studying de Rham cohomology from Bott and Tu as a lead-in to Čech cohomology, which I believe is one potential route to general sheaf cohomology. It looks like B&T covers this briefly. I'll be trying to hit the following sections:

  • Section 10, presheaves and Čech cohomology
  • Section 13, monodromy

But past that, it looks like I'll need to find other sources for studying sheaf cohomology. So, my questions:

  1. For my purposes, are there other sections (past section 6) of B&T I ought to read?
  2. What is a good 'next read' for understanding sheaf cohomology?

Finding Taylor's series of a function

Posted: 31 May 2021 08:04 PM PDT

Show that $\frac{e^{a \sin^{-1}(x)}}{\sqrt{1-x^2}}=1+\frac{ax}{1!}+\frac{(a^2+1^2)x^2}{2!}+\frac{a(a^2+2^2)x^3}{3!}+\frac{(a^2+1^2)(a^2+3^2)x^4}{4!}+\cdots$

My attempt: I integrated the function and got $\frac{e^{a \sin^{-1}(x)}}{a}$, then wrote out the series of $e^{a \sin^{-1}(x)}$, but it contained terms like $(\sin^{-1}(x))^2$, $(\sin^{-1}(x))^3$, and so on, so I could not find the series. My idea was to find the series of the antiderivative of the function and then differentiate the resulting series. Is there another way to do it?
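
A quick symbolic sanity check of the claimed expansion (a sketch assuming SymPy is available; it only inspects coefficients and is not a proof):

    from sympy import symbols, exp, asin, sqrt, series

    x, a = symbols('x a')
    f = exp(a * asin(x)) / sqrt(1 - x**2)
    # Coefficients should match 1, a, (a^2+1^2)/2!, a(a^2+2^2)/3!, (a^2+1^2)(a^2+3^2)/4!, ...
    print(series(f, x, 0, 5).expand())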

Question about left Hopf-modules

Posted: 31 May 2021 08:09 PM PDT

I'm reading Radford's text on Hopf algebras right now and I'm a bit confused about the definition of a left $A$-Hopf module. The definition given in the book is:

Let $A$ be a $\Bbbk$-bialgebra. A $\underline{\text{left $A$-Hopf module}}$ is a triple ($M$, $\mu$, $\rho$) where ($M, \mu$) is a left $A$-module, ($M, \rho$) is a left $A$-comodule, and $$ \rho(a m) = a_{(1)}m_{(-1)} \otimes a_{(2)}m_{(0)}.$$

The book uses sumless Sweedler notation, in which

  1. $\Delta (a) = a_{(1)} \otimes a_{(2)}$
  2. $\rho (m) = m_{(-1)} \otimes m_{(0)}$

I'm just looking to gain some intuition about what the axiom for the $\rho$ map in the definition is trying to convey. It's not clear to me what the meaning of this axiom is.
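
For what it's worth, one common way to read the axiom (my paraphrase, not Radford's wording): it says exactly that $\rho \colon M \to A \otimes M$ is a morphism of left $A$-modules, where $a \in A$ acts on $A \otimes M$ diagonally through $\Delta$: $$a\cdot(b\otimes m) := a_{(1)}b \otimes a_{(2)}m.$$ Equivalently, the action $\mu$ is a morphism of left $A$-comodules. So the axiom is the usual compatibility condition: each structure map respects the other.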

Convergence in distribution implies convergence of CDF

Posted: 31 May 2021 08:01 PM PDT

We know that if $X_n$ is a sequence of RVs with PDFs $f_n$, such that $f_n(x)$ converges pointwise to $f(x)$ for almost all $x$, then $X_n$ converges in distribution to $X$.

My question is: when do we have the converse implication? I think I have read somewhere that if the CDF of $X$ is continuous, we do get the converse implication.

What are the other cases? And do you have an example where this is false?
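
For reference, a standard example of how pointwise convergence of CDFs can fail at a discontinuity point: take $X_n = 1/n$ (a constant random variable) and $X = 0$. Then $X_n \to X$ in distribution, yet $$F_{X_n}(0) = 0 \text{ for all } n, \qquad F_X(0) = 1,$$ so the CDFs fail to converge exactly at $0$, the discontinuity point of $F_X$. This is why the usual definition only requires convergence at continuity points of $F_X$.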

Word for a transformation without an origin

Posted: 31 May 2021 07:47 PM PDT

Is there a precise term for a transformation which does not have an origin? For example, a translation of $\mathbb{R}^2$, or a deformation applied equally to each tile in a tiled plane, so that it looks the same everywhere? A rotation or shear would not count, since geometry farther from the origin gets deformed more than geometry near the origin.

What does each element in a $2\times 2$ matrix represent?

Posted: 31 May 2021 08:08 PM PDT

I know in a vector, $\left(\begin{array}{c}x\\ y\end{array}\right)$, what the $x$ and $y$ represent.

But in a $2\times 2$ matrix, what does each element represent?
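
A small numerical illustration of the most common reading (one interpretation among several): column $j$ of the matrix lists the coordinates of the image of the $j$th basis vector under the linear map the matrix represents.

    import numpy as np

    A = np.array([[2.0, 1.0],
                  [0.0, 3.0]])
    e1 = np.array([1.0, 0.0])
    e2 = np.array([0.0, 1.0])
    # Column j of A is where the j-th standard basis vector lands:
    print(A @ e1)  # [2. 0.] -- the first column
    print(A @ e2)  # [1. 3.] -- the second column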

Carmichael number n such that $\frac{\gcd(n-1, \phi(n))^2}{\lambda(n)^2} \geq n-1$.

Posted: 31 May 2021 07:33 PM PDT

Here $\phi(n)$ is Euler's phi function and $\lambda(n)$ is the Carmichael lambda function. With $\frac{\gcd(n-1, \phi(n))^3}{\lambda(n)^3} \geq n-1$ I can find some numbers $n$, such as $1729, 19683001, 631071001, 4579461601, 8494657921$, and so on, but with $\frac{\gcd(n-1, \phi(n))^2}{\lambda(n)^2} \geq n-1$, how can I find such an $n$?
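
In case a computational starting point helps, here is a brute-force search sketch (assuming SymPy; Korselt's criterion detects Carmichael numbers, and the inequality is tested in the rearranged form $\gcd(n-1,\phi(n))^2 \geq (n-1)\,\lambda(n)^2$ to stay in integers):

    from math import gcd
    from sympy import factorint
    from sympy.ntheory import totient, reduced_totient

    def is_carmichael(n):
        # Korselt: n composite, squarefree, and (p - 1) | (n - 1) for every prime p | n
        f = factorint(n)
        if len(f) < 2 or any(e > 1 for e in f.values()):
            return False
        return all((n - 1) % (p - 1) == 0 for p in f)

    for n in range(9, 10**6, 2):
        if is_carmichael(n):
            g = gcd(n - 1, int(totient(n)))
            if g * g >= (n - 1) * int(reduced_totient(n)) ** 2:
                print(n)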

Why is the graph of $\sec^2(x)/\tan^2(x)$ continuous at $x=\pi /2$?

Posted: 31 May 2021 07:56 PM PDT

$f(x)=\frac{\sec^2(x)}{\tan^2(x)}$

The domain of both $\sec^2(x)$ and $\tan^2(x)$ is $\mathbb{R}\setminus\{(2n+1)\frac{\pi}{2} : n \in \mathbb{Z}\}$, hence $f(x)$ also has the same domain. I expected a discontinuity at $x=\frac{\pi}{2}$.

But, the graph does not say so.

Graph from Wolfram Alpha: [image not included]

What am I missing in my understanding?

Also, even if the software did the manipulation $\frac{\sec^2(x)}{\tan^2(x)}=\frac{1}{\sin^2(x)}$, it should have been valid only for $x \in \mathbb{R} \setminus \{(2n+1)\frac{\pi}{2}\}$.
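
For what it's worth, the simplification itself is elementary wherever both sides are defined: $$\frac{\sec^2 x}{\tan^2 x} = \frac{1/\cos^2 x}{\sin^2 x/\cos^2 x} = \frac{1}{\sin^2 x}, \qquad \frac{1}{\sin^2 x} \to 1 \ \text{ as } x \to \frac{\pi}{2}.$$ So $x=\frac{\pi}{2}$ is a removable singularity: the two-sided limit exists even though $f(\frac{\pi}{2})$ is undefined, and plotting software generally cannot display a single missing point.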

Determining linear transformation

Posted: 31 May 2021 07:42 PM PDT

Let $V$ be a vector space, and $T:V\to V$ a linear transformation such that:

$T(2v_1 + 3v_2) = -5v_1 - 4v_2$ and $T(3v_1 + 5v_2) = 3v_1 - 2v_2$

Then:

$T(v_1) = {?}\,v_1 + {?}\,v_2$

$T(v_2) = {?}\,v_1 + {?}\,v_2$

$T(4v_1+2v_2) = {?}\,v_1 + {?}\,v_2$

I cannot solve this problem and have been at it for hours. I found a similar question here: Finding the basis of a vector space. I tried applying the same operations, but do not understand how they got to the final solution.
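
A numerical sketch of one approach (assuming $v_1, v_2$ are linearly independent; the arrays C and R below simply encode the two given equations): writing the unknown coordinates of $T(v_1)$ and $T(v_2)$ as the rows of a matrix turns the problem into a single $2\times 2$ linear solve.

    import numpy as np

    # Rows of C hold the left-hand coefficients, rows of R the right-hand coordinates:
    #   2 T(v1) + 3 T(v2) = -5 v1 - 4 v2
    #   3 T(v1) + 5 T(v2) =  3 v1 - 2 v2
    C = np.array([[2.0, 3.0],
                  [3.0, 5.0]])
    R = np.array([[-5.0, -4.0],
                  [ 3.0, -2.0]])
    Tv = np.linalg.solve(C, R)    # row 0 = coords of T(v1), row 1 = coords of T(v2)
    print(Tv)                     # [[-34. -14.], [ 21.   8.]]
    print(4 * Tv[0] + 2 * Tv[1])  # T(4 v1 + 2 v2) = [-94. -40.]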

Prove the bilinear form $a(u,v) = \int u'v'$ is not coercive on $H^1$

Posted: 31 May 2021 07:24 PM PDT

Prove the following two facts:

  • $a(u,v) = \int u'v'$, as a bilinear form over $H^1(\Bbb{R})$, is not coercive on $H^1(\Bbb{R})$
  • the differential equation $$-u'' = f,\qquad u(x)\to 0\ \text{ as }\ |x|\to \infty$$ need not have a solution even for $f\in C^\infty(\Bbb{R})\cap L^2(\Bbb{R})$

My attempt: for the first part, if we can show that the Poincaré inequality fails in this setting, then the form is not coercive.

That is, it suffices to show that $\|u\|_2\le C\|u'\|_2$ fails for every constant $C$. We construct a sequence in $H^1$ as follows:

$$u_n(x) = \begin{cases} x & 0\le x\le n\\ 2n-x & n\le x\le 2n\\ 0 & \text{otherwise} \end{cases}$$

Then $u_n\in H^1(\Bbb{R})$, but $\|u_n\|_2\sim n^{3/2}$ grows at a higher order than $\|u_n'\|_2 \sim \sqrt{n}$, hence the Poincaré inequality fails. Is my idea correct?
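
Making the comparison quantitative (assuming $u_n$ is extended by zero outside $[0,2n]$): $$\|u_n\|_2^2 = 2\int_0^n x^2\,dx = \tfrac{2}{3}n^3, \qquad \|u_n'\|_2^2 = \int_0^{2n} 1\,dx = 2n,$$ so $\|u_n\|_2/\|u_n'\|_2 = n/\sqrt{3} \to \infty$ and no single constant $C$ can work. Equivalently, $a(u_n,u_n)/\|u_n\|_{H^1}^2 = 2n/(2n + \tfrac{2}{3}n^3) \to 0$, which contradicts coercivity directly.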

Matrix for a linear transformation on finite-dimensional vector space

Posted: 31 May 2021 07:23 PM PDT

I find that the matrix of a linear transformation is defined in Sec. 37 ("Matrices") of the 2nd ed. of Finite-Dimensional Vector Spaces by Paul R. Halmos as follows:

Let $\mathcal V$ be an $n$-dimensional vector space, let $\mathbb X = \{x_1, \cdots, x_n\}$ be any basis in $\mathcal V$, and let $A$ be a linear transformation (operator) on $\mathcal V$. Since every vector (in $\mathcal V$) is a linear combination of the $x_i$, we have in particular $Ax_j = \sum_i \alpha_{ij}x_i$ for $j = 1, \cdots, n$. The set $(\alpha_{ij})$ of $n^2$ scalars, indexed with the double subscript $i, j$, is the matrix of $A$ in the coordinate system $\mathbb X$.

I also find a (different?) definition of the matrix in Lecture 2 of Autumn Quarter 2007–08 of Stanford University's EE263 - Linear Dynamical Systems by Stephen Boyd as follows:

Interpretation of $a_{ij}$:

$y_i = \sum_{j=1}^n a_{ij}x_j$

$a_{ij}$ is the gain factor from $j$th "input" ($x_j$) to the $i$th "output" ($y_i$). Thus, e.g., $i$th row of (matrix) $A$ concerns $i$th output, $j$th column of (matrix) $A$ concerns $j$th input.

In my understanding, the above two definitions of a matrix are not the same. I would appreciate it if someone could point out whether I am wrong. Thanks.
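
A small numerical check suggesting the two descriptions agree (a sketch; the particular numbers are arbitrary): Halmos reads column $j$ as the coordinates of the image of the $j$th basis vector, and Boyd's gain formula $y_i = \sum_j a_{ij}x_j$ reproduces exactly that once $x$ is expanded in the basis.

    import numpy as np

    A = np.array([[1.0, 2.0],
                  [3.0, 4.0]])
    x = np.array([5.0, 6.0])
    # Boyd: y_i = sum_j a_ij x_j
    y = A @ x
    # Halmos: A x = sum_j x_j * (image of the j-th basis vector) = sum_j x_j * A[:, j]
    y_cols = x[0] * A[:, 0] + x[1] * A[:, 1]
    print(np.allclose(y, y_cols))  # True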

Surjective operator to separable space

Posted: 31 May 2021 07:34 PM PDT

Let $X,Y$ be Banach spaces and $T:X\to Y$ be a surjective bounded linear operator. If $Y$ is separable then there exists a separable subspace $Z$ of $X$ such that $T(Z)=Y$. I tried the following:
$Y = \overline{\{y_1, \dots, y_n, \dots\}}$. Since $T$ is surjective, $y_i = Tx_i$ for some $x_i \in X$. Let $Z = \overline{\text{span}(x_1,\dots,x_n,\dots)}$. I know that $Z$ is separable, so if I could prove that $T(Z)=Y$ the proof would be complete. However, I could only prove that $\overline{T(Z)}=Y$. Any help would be appreciated. Thanks.

Conditions on Monotone Convergence Theorem

Posted: 31 May 2021 08:05 PM PDT

In Shiryaev's Probability, the formulation of the monotone convergence theorem is as follows:

Let $\eta, \xi, \xi_1, \xi_2, \ldots$ be random variables. If $\xi_n \geq \eta$ for all $n\geq 1$ and $\mathbb{E}\eta > -\infty$ and $\xi_n \uparrow \xi$, then $\mathbb{E}\xi_n \uparrow\mathbb{E}\xi$.

My question is: why is $\mathbb{E}\eta > -\infty$ necessary? Is there a situation where I can construct a sequence as in the statement of the theorem with a random variable where $\mathbb{E}\eta = -\infty$ that makes the MCT fail?
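
One standard candidate to examine (on $(0,1)$ with Lebesgue measure): take $\eta(x) = -1/x$, so $\mathbb{E}\eta = -\infty$, and $$\xi_n(x) = -\tfrac{1}{x}\,\mathbf{1}_{(0,1/n)}(x).$$ Then $\xi_n \geq \eta$ and $\xi_n \uparrow \xi \equiv 0$, but $\mathbb{E}\xi_n = -\int_0^{1/n}\tfrac{dx}{x} = -\infty$ for every $n$, so $\mathbb{E}\xi_n \not\to \mathbb{E}\xi = 0$.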

Primitive proof of the integral of sine and cosine?

Posted: 31 May 2021 08:00 PM PDT

Does anyone know of any "primitive" proofs of the integrals of sine and cosine, i.e. ones not making use of the fundamental theorem of calculus?

To check continuity of $f(z) = \begin{cases} \frac{\bar{z}^3}{z \operatorname{Re} z} & \text{if } z\neq 0 \\ 0 & \text{if } z=0 \end{cases}$

Posted: 31 May 2021 08:09 PM PDT

I want to check continuity of $f(z) = \begin{cases} \frac{\bar{z}^3}{z \operatorname{Re} z} & \text{if } z\neq 0 \\ 0 & \text{if } z=0 \end{cases}$

Here the function $\frac{\bar{z}^3}{z \operatorname{Re} z}$ is defined and continuous everywhere except at $z=0$ and on the imaginary axis.

To check continuity at $z=0$, I considered $\lim_{z\to 0}\frac{\bar{z}^3}{z \operatorname{Re} z}$, and along every path I evaluated, the limit was zero. In fact, I want to show that $\lim_{z\to 0}\frac{\bar{z}^3}{z \operatorname{Re} z}=0$, i.e., that $f$ is continuous at $0$. How can I proceed?
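
A computation worth doing before concluding the limit is $0$: the modulus simplifies to $$|f(z)| = \frac{|\bar z|^3}{|z|\,|\operatorname{Re} z|} = \frac{|z|^2}{|\operatorname{Re} z|},$$ which is small along straight lines through the origin but not along paths where $\operatorname{Re} z$ vanishes quickly. For instance, along $z = t^2 + it$ one gets $|f(z)| = (t^4+t^2)/t^2 = 1+t^2 \to 1$, not $0$.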

Lagrange multiplier / unit sphere

Posted: 31 May 2021 07:37 PM PDT

Let $\vec{v} = (2,1,-2)^{T} \in \mathbb{R}^3$ and let $\mathbb{S}$ denote the unit sphere, i.e. $\mathbb{S} = \{ (x,y,z)^{T} \in \mathbb{R}^3 \mid x^2+y^2+z^2=1 \}$.

Determine the shortest distance from $\vec{v}$ to $\mathbb{S}$.

The function I set up was $$F(x,y,z, \lambda) = \sqrt{(2-x)^2+(1-y)^2+(-2-z)^2}+\lambda(x^2+y^2+z^2-1)$$

but I fear that this is not the way to calculate the distance. Could someone help me? Thanks in advance.
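
Two remarks that may help. First, minimizing the squared distance $(2-x)^2+(1-y)^2+(-2-z)^2$ under the same constraint gives the same minimizer and avoids the square root. Second, for a sphere the answer has a closed form against which any Lagrange computation can be checked: $$\operatorname{dist}(\vec v, \mathbb{S}) = \bigl|\,\|\vec v\| - 1\,\bigr| = |3-1| = 2,$$ attained at $\vec v/\|\vec v\| = \tfrac{1}{3}(2,1,-2)^{T}$.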

Suppose that $\sum_{n=1}^\infty a_n=A$ and $\sum_{n=1}^\infty b_n=B$, $\{c_n\}$: $c_{2n-1}=a_n$ and $c_{2n} = b_n$. Then $\sum_{n=1}^\infty c_n=A+B$

Posted: 31 May 2021 07:59 PM PDT

Suppose that $\sum_{n=1}^\infty a_n=A$ and $\sum_{n=1}^\infty b_n=B$; also, define $\{c_n\}$ by $c_{2n-1}=a_n$ and $c_{2n} = b_n$ for $n \geq 1$. Prove that $$\sum_{n=1}^\infty c_n=A+B$$

Can I get help on how to rigorously prove this?

Assume that $S_n$ and $T_n$ are the partial sums of $\sum_{n=1}^\infty a_n$ and $\sum_{n=1}^\infty b_n$, respectively, we know that $S_n \longrightarrow A$ and $T_n \longrightarrow B$, so, how can I prove that if $U_n$ is the partial sum of $\sum_{n=1}^\infty c_n$, then, $U_n \longrightarrow A+B$?
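
One possible way to organize it: the even- and odd-indexed partial sums of $\sum c_n$ are exactly $$U_{2n} = \sum_{k=1}^{n}(a_k+b_k) = S_n + T_n, \qquad U_{2n+1} = S_{n+1} + T_n,$$ and both subsequences converge to $A+B$; since every index is covered by one of the two, $U_n \to A+B$.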

Normal derivative of a partial derivative

Posted: 31 May 2021 07:48 PM PDT

I am reading some lecture notes where the professor writes a condition as follows:

$$\frac{d}{dc}\bigg[\frac{\partial W(c',c)}{\partial c'}\bigg|_{c'=c}\bigg]_{c=c^*} < 0$$

This I can make sense of: the slope of the function $\frac{\partial W(c',c)}{\partial c'}\bigg|_{c'=c}$ is negative at $c=c^*$.

The professor goes on to say that by "carefully differentiating" we can rewrite this as:

$$ \bigg[\frac{\partial^2W(c',c)}{\partial c'^2} + \frac{\partial^2 W(c',c)}{\partial c' \partial c}\bigg]\Bigg|_{c'=c=c^*} < 0 $$

This I do not understand. My initial thought was that taking the derivative of $\frac{\partial W(c',c)}{\partial c'}\bigg|_{c'=c}$ with respect to $c$ should simply yield $\frac{\partial^2 W(c',c)}{\partial c' \partial c}\bigg|_{c'=c=c^*}$. Where does the extra term $\frac{\partial^2W(c',c)}{\partial c'^2}$ come from? Can someone point out what I have misunderstood?
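
The usual resolution (assuming $W$ is smooth enough for the mixed partials to exist): write $g(c) := \frac{\partial W}{\partial c'}(c,c)$, a function of $c$ alone. The substitution $c'=c$ makes both slots depend on $c$, so the multivariable chain rule gives $$g'(c) = \frac{\partial^2 W}{\partial c'^2}(c,c)\cdot \frac{dc'}{dc} + \frac{\partial^2 W}{\partial c'\,\partial c}(c,c) = \frac{\partial^2 W}{\partial c'^2}(c,c) + \frac{\partial^2 W}{\partial c'\,\partial c}(c,c),$$ since $c'=c$ forces $dc'/dc = 1$. The extra term appears because the evaluation $c'=c$ happens before differentiating in $c$, not after.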

Find $P(X>2,Y<4)$ $f(x, y)=\left\{\begin{array}{ll} e^{-y} & \text { if } 0<x<y<\infty \\ 0 & \text { otherwise } \end{array}\right\} $

Posted: 31 May 2021 08:01 PM PDT

Let (X, Y) be a random vector with joint density function given by

$f(x, y)=\left\{\begin{array}{ll} e^{-y} & \text { if } 0<x<y<\infty \\ 0 & \text { otherwise } \end{array}\right\} $

Find $P(X>2,Y<4)$

I think it is

$\int _2^4\int _x^4e^{-y}\:dy\,dx$, but I'm not sure.
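
A quick numerical check of that setup (a sketch assuming SciPy; in dblquad the inner variable is $y$, integrated from $x$ to $4$):

    from math import exp
    from scipy.integrate import dblquad

    # P(X > 2, Y < 4) = int_{x=2}^{4} int_{y=x}^{4} e^{-y} dy dx
    val, err = dblquad(lambda y, x: exp(-y), 2, 4, lambda x: x, lambda x: 4)
    print(val)                    # ~0.0804
    print(exp(-2) - 3 * exp(-4))  # closed form of the same iterated integral

The two agree, which is consistent with the region $\{2 < x < y < 4\}$ having been set up correctly.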

What does $1/n$ times the expected number of edges per vertex in a finite poset on $n$ points approach as $n$ goes to infinity?

Posted: 31 May 2021 07:22 PM PDT

Let $S_n$ be a maximal set of inequivalent posets on $n$ points (i.e., one with maximum possible cardinality). Let $E_n$ be the total number of edges in $S_n$. Clearly $|S_n|$ and $E_n$ depend only on $n$. What is $$\lim_{n\rightarrow\infty}\frac{E_n}{n^2|S_n|}?$$ For $n=2,\dots,9$ the values to three decimals are: $0.625, 0.511, 0.453, 0.418, 0.392, 0.371, 0.353, 0.338$.

The same question could also be asked for other types of graphs. I suppose it is intractable in general but the numbers above are somewhat interesting.

Analytical solution of a recurrence relation

Posted: 31 May 2021 08:09 PM PDT

A recurrence relation is defined as below, where $c$ is a constant:

$$ f(n) = \begin{cases} c, & \text{if $n=1$}\\\\ f(n-1)\left(\frac{c}{n}\right), & \text{if $n>1$ and $n \in \mathbb{Z}^+$} \end{cases} $$

Further expanding the function we get: $$ f(n) = \frac{c^n}{n!}\\\\ \delta(n) = f(n)-f(n-1) = \frac{c^n}{n!}\left(1-\frac{n}{c}\right),~n>1 ~\text{and}~ n \in \mathbb{Z}^+ $$

In the limit, $\delta(n)\rightarrow 0$ as $n\rightarrow\infty$. However, for computational purposes I am trying to solve for $n$ such that $\delta(n) = \varepsilon$, where $\varepsilon$ is a very small number. Of course, $n$ can be found by calculating $\delta(n)$ for $n=2, 3, 4, \cdots$ until the value of $\delta(n)$ reaches $\varepsilon$. In order to avoid all that, I am looking for an analytical solution to $\delta(n) = \varepsilon$.
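
Short of a closed form, one can at least avoid the term-by-term scan by extending $n! = \Gamma(n+1)$ to real $n$ and bisecting (a sketch; starting at $n = 2c$ is a convenient choice because the ratio test shows $|\delta|$ decreases monotonically from there on):

    from math import exp, lgamma, log

    def abs_delta(n, c):
        # |delta(n)| = c^n / n! * |1 - n/c|, with n! = Gamma(n+1) for real n
        return exp(n * log(c) - lgamma(n + 1)) * abs(1 - n / c)

    def solve_n(c, eps, hi=10_000.0):
        lo = 2 * c  # beyond this point |delta| decreases monotonically
        while hi - lo > 1e-9:
            mid = (lo + hi) / 2
            if abs_delta(mid, c) > eps:
                lo = mid
            else:
                hi = mid
        return hi

    print(solve_n(c=10.0, eps=1e-12))  # real n with |delta(n)| ~ eps; round up for integer n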

Prove a matrix $A Q^{-1}A^T$ is invertible

Posted: 31 May 2021 08:06 PM PDT

Let $A$ be an $m \times n$ matrix with linearly independent rows and let $Q$ be an $n \times n$ positive definite matrix. Assume that $Q$ is invertible. Show that $A Q^{-1} A^T$ is invertible.

I know that if the rows of $A$ are linearly independent, then $AA^T$, which is an $m \times m$ matrix, is invertible (a proof can be found in a related question).

But in this case, I don't have any idea to begin.
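
A sketch of the usual argument, in case it helps: suppose $AQ^{-1}A^Tx = 0$ for some $x \in \mathbb{R}^m$. Then $$0 = x^T A Q^{-1} A^T x = (A^Tx)^T Q^{-1} (A^Tx),$$ and since $Q$ is positive definite, so is $Q^{-1}$, forcing $A^Tx = 0$. The rows of $A$ (i.e., the columns of $A^T$) are linearly independent, so $x = 0$. Hence $AQ^{-1}A^T$ has trivial kernel, and a square matrix with trivial kernel is invertible.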

Let $f : \mathbb{R} \to \mathbb{R}$ be measurable and let $Z = {\{x : f'(x)=0}\}$. Prove that $λ(f(Z)) = 0$.

Posted: 31 May 2021 07:53 PM PDT

The following is an exercise from Bruckner's Real Analysis:

Let $f : \mathbb{R} \to \mathbb{R}$ be measurable and let $Z = {\{x : f'(x)=0}\}$. Prove that $λ(f(Z)) = 0$.

For the case of $f$ nondecreasing / nonincreasing, we can define $g=f^{-1}$ and then $g'$, and then use the following theorem from the book:

Let $f$ be nondecreasing / nonincreasing / of bounded variation on $[a,b]$. Then $f$ has a finite derivative almost everywhere.

Is it possible to "reduce" the case of an arbitrary measurable $f$ to a nondecreasing one, or how else can the claim be proved for any measurable $f$?

Accessibility in control theory: are these 2 different definitions equivalent?

Posted: 31 May 2021 07:51 PM PDT

So I have been studying control theory, and I have found different authors use different definitions for the concept of accessibility. In particular, I find that with one definition it is trivial to prove that accessibility is a necessary condition for controllability, and with the other one...not so much.

Let me introduce some concepts first. We'll be dealing with a differential equation on a smooth $n$-manifold $M$ of the form \begin{equation}\label{controlsys} \dot{x} = f(x,u(t)), \end{equation} where $u(t)$ is a time-dependent map from the nonnegative reals $\mathbb{R}^+$ to a constraint set $\Omega \subset \mathbb{R}^m$; $u$ is the control.

Define the reachable sets: Given $x_0 \in M$, we define $R(x_0,t)$ to be the set of all $x \in M$ for which there exists an admissible control $u$ such that there is a trajectory of the control system with $x(0)=x_0, x(t) = x$. The reachable set from $x_0$ at time $T$ is defined to be \begin{equation} R_T(x_0) = \bigcup_{0 \leq t \leq T} R(x_0,t). \end{equation}

Now here come the two definitions of accessibility I have found.

  1. The control system on $M$ is said to be accessible from $p \in M$ if for every $T>0$, the reachable set $R_T(p)$ contains a nonempty open set.
  2. The control system on $M$ is said to be accessible from $p \in M$ if for some $T>0$, the reachable set $R_T(p)$ contains a nonempty open set.

Since my definition of controllability is simply: for all $p \in M$ there exists a $T>0$ such that $R_T(p) = M$ , it is clear that controllability implies accessibility from every point $p$ according to the second definition.

However, having seen how many papers use either one of the two definitions of accessibility above nonchalantly, I believe (1) and (2) are equivalent. Intuitively it kinda makes sense (it'd be very weird for $R_T(p)$ to go from having an empty interior to a nonempty interior all of a sudden as we vary $T$--and accessibility after time $T$ implies accessibility for all later times, of course), but I'm stuck trying to prove it. I also couldn't find any authoritative references discussing the distinction, which I thought was weird.

Can you please help me prove these 2 definitions are equivalent? Thanks!

How are all Mersenne primes of the form $27x^2+4y^2$ (except 3 and 7)?

Posted: 31 May 2021 08:09 PM PDT

I noticed something interesting about Mersenne primes: except for 3 and 7, you can write them in the form $27x^2+4y^2$, and it seems to work for all other Mersenne primes and their associated perfect numbers.

Another interesting observation: it does not work for composite Mersenne numbers like 2047 or 8388607.

Is there a way to explain this? I am unsure of how to start and am hence looking for some tips.
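
A brute-force verification sketch for small cases (the exponent list below is just the first few Mersenne prime exponents; the search runs over nonnegative integers):

    from math import isqrt

    def rep_27_4(n):
        # find x, y >= 0 with 27 x^2 + 4 y^2 = n, if any
        for x in range(isqrt(n // 27) + 1):
            r = n - 27 * x * x
            if r % 4 == 0 and isqrt(r // 4) ** 2 == r // 4:
                return x, isqrt(r // 4)
        return None

    for p in [2, 3, 5, 7, 13, 17, 19, 31]:  # Mersenne prime exponents
        n = 2 ** p - 1
        print(n, rep_27_4(n))
    print(2047, rep_27_4(2047))             # a composite Mersenne number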

What's wrong with this Penrose pattern?

Posted: 31 May 2021 07:51 PM PDT

I programmed the Penrose tiling by projecting a portion of a 5D lattice to 2D space, by the "cut and project" method described in

  1. Quasicrystals: projections of 5-D lattice into 2 and 3 dimensions, H. Au-Yang and J. Perk.
  2. Generalised 2D Penrose tilings, A. Pavlovitch and M. Kléman

The orthonormal basis is chosen as $$ M=\sqrt{\frac{2}{5}} \begin{bmatrix} \cos 0 & \cos \frac{2\pi}{5} & \cos \frac{4\pi}{5}& \cos \frac{6\pi}{5}& \cos \frac{8\pi}{5} \\ \sin 0 & \sin \frac{2\pi}{5} & \sin \frac{4\pi}{5}& \sin \frac{6\pi}{5}& \sin \frac{8\pi}{5} \\ \cos 0 & \cos \frac{4\pi}{5} & \cos \frac{8\pi}{5}& \cos \frac{12\pi}{5}& \cos \frac{16\pi}{5} \\ \sin 0 & \sin \frac{4\pi}{5} & \sin \frac{8\pi}{5}& \sin \frac{12\pi}{5}& \sin \frac{16\pi}{5} \\ \frac{1}{\sqrt {2}} & \frac{1}{\sqrt {2}} & \frac{1}{\sqrt {2}} & \frac{1}{\sqrt {2}} & \frac{1}{\sqrt {2}}\\ \end{bmatrix} $$ Each row presents a basis vector, i.e. $$ M_i\cdot M_j=0, \;\;\textrm{for } i\neq j.$$ and $$||M_i||=1, \;\;\textrm{for } 1\leq i \leq 5. $$
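
A quick numerical check of these orthonormality claims (a sketch; the rows of the array below are $M_1,\dots,M_5$):

    import numpy as np

    j = np.arange(5)
    M = np.sqrt(2 / 5) * np.vstack([
        np.cos(2 * np.pi * j / 5),
        np.sin(2 * np.pi * j / 5),
        np.cos(4 * np.pi * j / 5),
        np.sin(4 * np.pi * j / 5),
        np.full(5, 1 / np.sqrt(2)),
    ])
    print(np.allclose(M @ M.T, np.eye(5)))  # True: the rows are orthonormal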

$M$ consists of the parallel operator (representing the physical space) $$ A=\begin{bmatrix} M_1\\ M_2 \\ \end{bmatrix}= \begin{bmatrix} \cos 0 & \cos \frac{2\pi}{5} & \cos \frac{4\pi}{5}& \cos \frac{6\pi}{5}& \cos \frac{8\pi}{5} \\ \sin 0 & \sin \frac{2\pi}{5} & \sin \frac{4\pi}{5}& \sin \frac{6\pi}{5}& \sin \frac{8\pi}{5} \\ \end{bmatrix} $$ and the perpendicular operator $$ B=\begin{bmatrix} M_3\\ M_4 \\ M_5 \\ \end{bmatrix}=\begin{bmatrix} \cos 0 & \cos \frac{4\pi}{5} & \cos \frac{8\pi}{5}& \cos \frac{12\pi}{5}& \cos \frac{16\pi}{5} \\ \sin 0 & \sin \frac{4\pi}{5} & \sin \frac{8\pi}{5}& \sin \frac{12\pi}{5}& \sin \frac{16\pi}{5} \\ \frac{1}{\sqrt {2}} & \frac{1}{\sqrt {2}} & \frac{1}{\sqrt {2}} & \frac{1}{\sqrt {2}} & \frac{1}{\sqrt {2}}\\ \end{bmatrix} $$

The 5D lattice points are integer combinations of basis such as $$ p=i \begin{bmatrix} 1\\ 0\\ 0\\ 0\\ 0\\ \end{bmatrix} + j\begin{bmatrix} 0\\ 1\\ 0\\ 0\\ 0\\ \end{bmatrix} +\dots, \;\; i,j,\dots \in \mathbb{Z} $$

A 5D cube (centered at origin) is projected into 3D as polytope $$ v'= B v, \;\; v\in hypercube $$ so that I can check whether a $p$ is inside this polytope (20 faces). This is called "cutting" the 5D lattice points.

The resultant 2D projection $Ap$ is: [image not included]

Everything works fine; however, my result differs from the "standard" one (e.g., the one on the Wikipedia page) as follows:

[comparison image not included]

Is this a mistake or an alternative view of the same tiling?

Finally, I found this image (from Vertex Frequencies in Generalized Penrose Patterns, by E. Zobetz and A. Preisinger):

[image not included]

where the center of the standard tiling exhibits the "S" pattern, while the center of my version has the "ST" pattern. But what does this mean exactly?

Finding the value $k$ for which $p(k)$ (poisson distribution) is at a maximum

Posted: 31 May 2021 08:06 PM PDT

Let $\mu > 0$ be given and $p(k)=\frac{\mu^k}{k!}e^{-\mu}$ for $k = 0, 1, \ldots$ Find the value of $k$ for which $p(k)$ is at a maximum.

My teacher gave two hints:

  • there are values of $\mu$ for which there are multiple values of $k$ for which $p(k)$ is at a maximum.

  • look at $\frac{p(k+1)}{p(k)}$

I don't know how I could use these hints for this problem.
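
One way to use the second hint (a sketch): the ratio $$\frac{p(k+1)}{p(k)} = \frac{\mu}{k+1}$$ is $\geq 1$ exactly when $k \leq \mu - 1$, so $p$ increases up to around $k = \mu$ and decreases afterwards; the maximum is at $k = \lfloor\mu\rfloor$. When $\mu$ is a positive integer, the ratio equals $1$ at $k = \mu - 1$, giving the tie $p(\mu - 1) = p(\mu)$ that the first hint alludes to.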

If $H$ and $K$ are abelian normal subgroups of a group that intersect trivially, then prove $HK$ is abelian.

Posted: 31 May 2021 07:54 PM PDT

Let $H$ and $K$ be two abelian normal subgroups of a group with $H\cap K=\{e\}$. Prove that $HK$ is also abelian.

My proof:

Result used: $HK$ is a subgroup if and only if $HK=KH$.

Now let $h_1k_1, h_2k_2 \in HK$ where $h_1,h_2\in H$ and $k_1,k_2\in K$.

We need to show $(h_1k_1)(h_2k_2)=(h_2k_2)(h_1k_1).$

So $(h_1k_1)(h_2k_2)=h_1(k_1h_2)k_2. \quad (*)$

Now $k_1h_2\in KH=HK$, so $k_1h_2\in HK$, therefore $k_1\in H$.

Since $H$ is abelian, $k_1h_2=h_2k_1$.

So from $(*),$ $(h_1k_1)(h_2k_2)=h_1(k_1h_2)k_2=(h_1h_2)(k_1k_2)=(h_2h_1)(k_2k_1)=h_2(h_1k_2)k_1=(h_2k_2)(h_1k_1)$

Hence $HK$ is abelian.

Is my proof correct? Thanks.
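
For comparison, the standard route avoids membership claims like "$k_1 \in H$" and goes through commutators instead: for $h \in H$ and $k \in K$, normality of both subgroups gives $$hkh^{-1}k^{-1} = (hkh^{-1})k^{-1} \in K, \qquad hkh^{-1}k^{-1} = h(kh^{-1}k^{-1}) \in H,$$ so $hkh^{-1}k^{-1} \in H\cap K = \{e\}$ and $hk = kh$ for all $h \in H$, $k \in K$; commutativity of $HK$ then follows as in $(*)$.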
