Monday, March 29, 2021

Recent Questions - Mathematics Stack Exchange

Maximal, minimal, greatest, least

Posted: 29 Mar 2021 08:09 PM PDT

If there is only one element in the poset, then what are the maximal, minimal, greatest and least elements? Would it be the same element, or do we need to take "$\phi$" into account as well?

Find out the convex hull of the set $\left\{\pm \mathbf{u} \mathbf{u}^{T} \mid\|\mathbf{u}\|=1\right\}$

Posted: 29 Mar 2021 08:03 PM PDT

Find the convex hull of the set $\left\{\pm \mathbf{u u}^{T} \mid\|\mathbf{u}\|=1\right\}$ and express it in a compact form.

According to the answer from @Cloudscape:

  • The first step of finding the convex hull of a given set would be to visualize the convex hull and guess it.
  • The second step would be to prove your guess contains the set of which you wanted to find the convex hull.
  • Next, prove that your guess is convex.
  • Finally, prove that any convex set containing the set will include your guess.

But I am struggling with the first step, I don't know how to visualize the given set.

Additionally, I also wonder what "compact form" specifically means here.
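As an aside, one hedged way to start the visualization step is to sample the set numerically in the smallest case $n=2$ and inspect the coordinates $(a,b,c)$ of the symmetric matrices $\begin{pmatrix} a & b \\ b & c \end{pmatrix}$ that arise (the code and the choice of coordinates are mine, not from the question):

    import numpy as np

    # Sample +-u u^T for random unit vectors u in R^2 and record the
    # coordinates (a, b, c) of the symmetric matrix [[a, b], [b, c]].
    rng = np.random.default_rng(0)
    pts = []
    for _ in range(1000):
        u = rng.normal(size=2)
        u /= np.linalg.norm(u)
        for s in (+1, -1):
            M = s * np.outer(u, u)
            pts.append((M[0, 0], M[0, 1], M[1, 1]))
    pts = np.array(pts)

    # traces a + c are exactly +1 or -1
    print(np.unique(np.round(pts[:, 0] + pts[:, 2], 12)))

Plotting these points in $(a,b,c)$-space (e.g. with matplotlib) shows two circles of radius $\tfrac12$, one in each of the planes $a+c=\pm 1$, which gives a concrete starting point for guessing the hull.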

Understanding geometric visualisation of a double point (fat point)

Posted: 29 Mar 2021 08:06 PM PDT

This question comes from Example 12.21 (a) of Gathmann's 2019 notes, here.

In the example, take $R = K[x]/(x^2)$ for some field $K$, so that $\operatorname{Spec} R$ is a single point $\mathfrak{p} = (x)$. In the third paragraph of the example, he says

Geometrically, one can think of $\operatorname{Spec} R$ as "a point that extends infinitesimally in one direction": As on the affine line $\mathbb{A}^1_K$, there are polynomial functions in one variable on $\operatorname{Spec} R$, but the space is such an infinitesimally small neighborhood of the origin that we can only see the linearization of the functions on it, and that it does not contain any actual points except 0.

I am a little confused by this wording, so any clarification on what he means, or otherwise how else to visualise $\operatorname{Spec} R$ would be appreciated.

I have tried to reason it as follows. The space $R$ can be thought of as a two-dimensional $K$-vector space, one basis element given by $1$ and another by $x$. Any $f \in R$ takes the form $f = a+bx$, so the distinguished opens $D(f)$ are either $\operatorname{Spec} R$ itself for $a \neq 0$, or the empty set for $a = 0$. In this way, the single point of $\operatorname{Spec} R$ lies only 'over' the entire 'constant axis' of $R$ minus the origin, which can 'collapse' into a fat point at the origin. However, I'm not sure if I'm correct here. Also, what does he mean by "the space": $\operatorname{Spec} R$, $\mathbb{A}^1_K$, or something else?

Optimal Mixed Strategies - Game Theory

Posted: 29 Mar 2021 07:55 PM PDT

I am reading the book Game Theory by E. N. Barron (p. 13).

One of the properties for optimal strategies says:

If $Y$ (the mixed strategy for the second player) is optimal for II and $y_{j}>0,$ then $E(X, j)=$ value $(A)$ for any optimal mixed strategy $X$ for I. Similarly, if $X$ is optimal for $I$ and $x_{i}>0,$ then $E(i, Y)=$ value $(A)$ for any optimal $Y$ for II. Thus, if any optimal mixed strategy for a player has a strictly positive probability of using a row or a column, then that row or column played against any optimal opponent strategy will yield the value. This result is also called the Equilibrium Theorem.

$X, Y$ are mixed strategies for players I and II respectively, and are probability vectors.

I can hardly find any other version of this claim. Is there any connection to the Nash equilibrium?

Thank you.

How to prove that $ \int_0^{\infty } \frac{\psi ^{(0)}(z+1)+\gamma }{z^{15/8}} \, dz = \frac{2 \pi}{\sqrt{2-\sqrt{2}}} \zeta(\frac{15}{8}) $?

Posted: 29 Mar 2021 07:53 PM PDT

This integral is from an exercise in an old calc book by Edwards from the 20s.

The $\psi$ is the Digamma function and the $\zeta$ is the Riemann Zeta function. How would one go about trying to prove this? Does this require just basic substitution? Thank you for your time.

Mathematica shows that the two values agree; both are approximately $14.611879190740$.
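For what it's worth, the numerical agreement is easy to reproduce outside Mathematica as well; here is a hedged sanity check with Python's mpmath (a check, not a proof):

    import mpmath as mp

    s = mp.mpf(15) / 8
    # left-hand side: integrate (psi(z+1) + gamma) / z^(15/8) over (0, oo)
    lhs = mp.quad(lambda z: (mp.digamma(z + 1) + mp.euler) / z**s, [0, 1, mp.inf])
    # right-hand side: 2*pi / sqrt(2 - sqrt(2)) * zeta(15/8)
    rhs = 2 * mp.pi / mp.sqrt(2 - mp.sqrt(2)) * mp.zeta(s)
    print(lhs, rhs)   # both ~ 14.6118791907...

As for a proof route, expanding $\psi(z+1)+\gamma=\sum_{n\ge 1}\frac{z}{n(n+z)}$ and integrating term by term (each term is a Beta-type integral of the form $\int_0^\infty \frac{u^{a-1}}{1+u}\,du=\frac{\pi}{\sin \pi a}$) looks like a natural thing to try, though the details would need checking.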

Weird Power tower convergence

Posted: 29 Mar 2021 07:52 PM PDT

Sup people Mr Beast back for another math vlogz,

I know that the convergence of $x^{x^{x^{\cdots}}}$ has known bounds, which I can work out through derivatives, but I have been wondering about something similar: $x^{2x^{3x^{4x^{\cdots}}}}$. For what values of $x$ is this expression convergent?
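As an aside, a hedged way to experiment numerically (for $x>0$; the top-down truncation convention below is my choice, not from the question):

    def tower(x, depth):
        # truncate x^(2x^(3x^(4x^...))) after `depth` levels, evaluated top-down
        val = depth * x
        for k in range(depth - 1, 0, -1):
            try:
                val = (k * x) ** val
            except OverflowError:
                val = float('inf')   # stand-in for "astronomically large"
        return val

    for x in (0.05, 0.1, 0.2, 0.3):
        print(x, [tower(x, d) for d in (5, 10, 20, 40)])

Note that float('inf') propagates sensibly here: a base above $1$ keeps it infinite, while a base below $1$ collapses it to $0$, mimicking what the true enormous value would do. This only suggests where a convergence cutoff might sit; it proves nothing.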

Thanks

Mr Beast

Subscroob to my channel

Finite set of points as hypersurfaces in $\mathbb{A}^2(\mathbb{R})$

Posted: 29 Mar 2021 07:49 PM PDT

We know that a finite set of points $(a_1,b_1),\ldots,(a_n,b_n)$ in $\mathbb{A}^2(\mathbb{R})$ is a hypersurface. Indeed, if we consider $$f=\prod_{i=1}^n((x-a_i)^2+(y-b_i)^2),$$ then we have $\{(a_1,b_1),\ldots,(a_n,b_n)\}=V(f)$.

My question here is: is it possible to obtain the same result with $f$ irreducible, so that every finite set of points in $\mathbb{A}^2(\mathbb{R})$ is the hypersurface of an irreducible polynomial?

Confusion about different confidence intervals calculated

Posted: 29 Mar 2021 07:46 PM PDT

I recently computed an independent t-test in SPSS using the parameters below:

Group 1: n = 70, mean = 4.3226, SD = 1.11731, SEM = 0.13354

Group 2: n = 269, mean = 4.4117, SD = 1.14620, SEM = 0.06988

The 95% CI calculated was (-0.39006,0.21188)

However, when I swap the two groups around, the 95% CI becomes (-0.21188,0.39006).

While I understand that the width is the same, the endpoints aren't (each interval is the negation of the other), and the two intervals overlap. Which CI do I use?
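For reference, a hedged Python sketch that reproduces these numbers under the usual pooled (equal-variance) two-sample t interval, which appears to be what SPSS reported; swapping the groups only negates the mean difference, so the interval flips sign:

    import math
    from scipy import stats

    n1, m1, s1 = 70, 4.3226, 1.11731     # group 1
    n2, m2, s2 = 269, 4.4117, 1.14620    # group 2

    df = n1 + n2 - 2                                  # pooled degrees of freedom
    sp2 = ((n1 - 1)*s1**2 + (n2 - 1)*s2**2) / df      # pooled variance
    se = math.sqrt(sp2 * (1/n1 + 1/n2))               # SE of the mean difference

    t = stats.t.ppf(0.975, df)
    diff = m1 - m2                                    # order of subtraction sets the sign
    print(diff - t*se, diff + t*se)                   # ~ (-0.390, 0.212)

Either interval is fine as long as you state which difference (group 1 minus group 2, or the reverse) it refers to.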

Show $\sqrt{5 + \sqrt{21}} = \frac{1}{2}(\sqrt 6 + \sqrt{14})$

Posted: 29 Mar 2021 07:48 PM PDT

Sifting through my old Galois Theory notes, I found a proof that $\sqrt{5 + \sqrt{21}} = \frac{1}{2}(\sqrt 6 + \sqrt{14})$, but my proof was chicken-scratch so I can no longer decipher it.

Can somebody come up with a way of showing this?
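As an editorial aside, a hedged sketch of one standard route is to square both sides (both sides are positive, so squaring is reversible): $$\left(\frac{\sqrt{6}+\sqrt{14}}{2}\right)^{2}=\frac{6+14+2\sqrt{6\cdot 14}}{4}=\frac{20+4\sqrt{21}}{4}=5+\sqrt{21},$$ using $\sqrt{84}=2\sqrt{21}$.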

combinatorics and overcounting

Posted: 29 Mar 2021 07:57 PM PDT

Say I want an eight-card hand consisting of three cards that share a rank, three cards that share a second rank, and two cards that share a third rank. How many different eight-card hands are there?

$$\binom{13}{1}\binom{4}{3}\binom{12}{1}\binom{4}{3}\binom{11}{1}\binom{4}{2}$$

I seem to be overcounting by a factor of $2$, and I was wondering why. Could someone explain where and why I am overcounting? Also, are my choices right?
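An editorial sketch, hedged: the factor of $2$ most likely comes from the two three-of-a-kind ranks being interchangeable, so picking a "first" and a "second" triple rank counts each hand twice. Choosing the pair of triple ranks without order gives $$\binom{13}{2}\binom{4}{3}^{2}\binom{11}{1}\binom{4}{2},$$ which equals the original expression divided by $2$.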

Showing that a quadratic space embeds into its Clifford Algebra

Posted: 29 Mar 2021 07:49 PM PDT

Let $(V,q)$ be a quadratic space, and $\mathscr{T}(V)$ its associated tensor algebra. The quotient map $\pi_q:\mathscr{T}(V)\to \mathscr{T}(V)/\mathscr{I}_q(V)=C\ell(V,q)$, where $\mathscr{I}_q(V)$ is the two-sided ideal generated by $v\otimes v+ q(v)1$, restricts to an injection $\mathscr{T}^1(V)\hookrightarrow C\ell(V,q)$.

I am aware that the proof in Lawson & Michelsohn is incorrect. I also understand that this question has already been answered here and here. However, I have a weak background in representation theory (I know the definition of a representation, and that is about the extent of my knowledge), and the proof there just doesn't make sense to me. This result is apparently trivial, but I cannot for the life of me find an accessible (or elementary) proof. Can anyone supply me with a proof (or reference) for this, assuming minimal background?

How to solve the differential equation $\frac{dy}{dx} = \sin^2(x-y+1)$

Posted: 29 Mar 2021 08:09 PM PDT

I have to solve the following differential equation:

$$\frac{dy}{dx} = \sin^2(x-y+1)$$

My attempt was:

Let $E = x-y+1 \implies \frac{dE}{dx}=1-\frac{dy}{dx} \iff \frac{dy}{dx} =1- \frac{dE}{dx}$

And I got $1- \sin^2(E)= \frac{dE}{dx}$.

separating variables

$dx=\frac{1}{1-\sin^2(E)}dE$

and integrating both sides,

$x+C=\tan(E)$; finally I got $y(x)= x - \arctan(x+C) + 1$.

But when I differentiate both sides I don't get the original differential equation, so my answer seems to be incorrect. What is wrong?
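As an editorial aside, a hedged sanity check with sympy suggests the candidate solution does satisfy the ODE, so the slip may be in the differentiation check rather than in the integration:

    import sympy as sp

    x, C = sp.symbols('x C')
    y = x - sp.atan(x + C) + 1          # candidate solution

    lhs = sp.diff(y, x)                 # dy/dx = 1 - 1/(1 + (x + C)**2)
    rhs = sp.sin(x - y + 1)**2          # sin^2(x - y + 1) with y substituted
    print(sp.simplify(lhs - rhs))       # prints 0: the ODE is satisfied

By hand: $x - y + 1 = \arctan(x+C)$, and $\sin^2(\arctan u) = \frac{u^2}{1+u^2} = 1 - \frac{1}{1+u^2}$, which matches $y'(x)$.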

Evaluate the Lebesgue integrals:

Posted: 29 Mar 2021 07:46 PM PDT

Throughout this problem the integrals written are Lebesgue integrals.

  1. $\lim_{n \rightarrow \infty} \int_0^1 \frac{ne^x}{1+n^2\sqrt{x}}dx$
  2. $\lim_{n \rightarrow \infty} \int_0^1 n\log(1+\frac{x^3}{n})dx$
  3. $\sum_{n=0}^\infty \frac{1}{n!} \int_1^2 (\log(x))^ndx$

Here is my attempt at these integrals.

  1. As $n \rightarrow \infty$ we see that for every $x \in (0,1]$ we have $\frac{ne^x}{1+n^2\sqrt{x}} \rightarrow 0$. Then for every $n$ and a.e. $x$ we see that $\left|\frac{ne^x}{1+n^2\sqrt{x}}\right| \leq \frac{e}{\sqrt{x}}$, which is integrable over $[0,1]$. Then by the Dominated Convergence Theorem, is the integral just equal to $0$?

  2. For this one I think that the integrand tends to $x^3$ but I am having a hard time coming up with a dominating function. Any help would be appreciated.

  3. I think this one is neat (if I did it correctly). We have $$\sum_{n=0}^\infty\frac{1}{n!} \int_1^2 (\log(x))^n dx = \sum_{n=0}^\infty \int_1^2 \frac{1}{n!} (\log(x))^n dx;$$ notice that the integrand is nonnegative, so by the Monotone Convergence Theorem we can put the summation inside: $$\sum_{n=0}^\infty \int_1^2 \frac{1}{n!} (\log(x))^n dx = \int_1^2 \sum_{n=0}^\infty \frac{(\log(x))^n}{n!} dx= \int_1^2 e^{\log(x)} dx = \int_1^2 x\, dx = 1.5$$

Please correct any mistakes that you see, thank you!
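An editorial aside, hedged: for (2), one standard dominating function comes from $\log(1+t)\le t$ for $t\ge 0$, which gives $n\log(1+x^3/n)\le x^3$, so the limit would be $\int_0^1 x^3\,dx = \frac14$. A quick numeric sanity check of all three answers, with a large but finite $n$ standing in for the limit:

    import mpmath as mp

    n = 10**4
    print(mp.quad(lambda x: n * mp.e**x / (1 + n**2 * mp.sqrt(x)), [0, 1]))  # small, heading to 0
    print(mp.quad(lambda x: n * mp.log(1 + x**3 / n), [0, 1]))               # ~ 0.25
    # (3): partial sums of the series approach 1.5
    print(mp.fsum(mp.quad(lambda x: mp.log(x)**k / mp.factorial(k), [1, 2])
                  for k in range(30)))                                        # ~ 1.5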

Is it possible to find two subgroups of $D_6$ such that $D_6$ is isomorphic to the direct product of such two subgroups?

Posted: 29 Mar 2021 08:09 PM PDT

I am wondering if we can find two nontrivial subgroups $A$ and $B$ of the dihedral group $D_6$ of order 12 such that $D_6$ is isomorphic to the direct product $$A \times B?$$

I start by writing down the nontrivial subgroups of $D_6$, which have orders $\{2,3,4,6,12\}$, and I suspect that a product of subgroups of orders $2$ and $6$ will do. But how can I prove it? Can someone please give me a pointer?
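As an editorial aside, a hedged computational check with sympy (recent versions; assuming $D_6$ denotes the order-12 dihedral group, as the subgroup orders above suggest) supports the guess with subgroups of orders $2$ and $6$:

    from sympy.combinatorics.named_groups import DihedralGroup, CyclicGroup, SymmetricGroup
    from sympy.combinatorics.group_constructs import DirectProduct
    from sympy.combinatorics.homomorphisms import is_isomorphic

    D6 = DihedralGroup(6)                                 # order 12
    G = DirectProduct(CyclicGroup(2), SymmetricGroup(3))  # order 2 * 6 = 12
    print(is_isomorphic(D6, G))                           # True

For a proof, natural candidates would be the center (order $2$) and a dihedral subgroup of order $6$.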

Probability of at least 1 sequence of 6 heads followed by 6 tails among $n$ tosses?

Posted: 29 Mar 2021 08:10 PM PDT

You flip a coin $n > 11$ times. What is the probability that you get at least one occurrence of 6 heads immediately followed by 6 tails?

I am completely lost on how to solve this in an elegant way. The approach that I am thinking of would involve the inclusion-exclusion principle. We'd examine the feasibility of each location in the sequence NOT being the start of the said sequence.

We can define $a_i$ to be the event that the said pattern does not begin at the $i$-th position. Then the probability we're looking for is

$$ 1 - P\left(\bigcap_{i=1}^{n - 11}a_i \right). $$

To find $P\left(\bigcap_{i=1}^{n - 11}a_i \right)$, I believe I would need to invoke the inclusion-exclusion principle, and this will get pretty messy. Is there a simpler approach?
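As an editorial aside: not a closed form, but a hedged Monte Carlo sketch is handy for checking any exact answer (the pattern and toss count are parameters):

    import random

    def estimate(n, pattern='HHHHHHTTTTTT', trials=100_000):
        # fraction of length-n toss sequences containing the pattern
        hits = 0
        for _ in range(trials):
            s = ''.join(random.choice('HT') for _ in range(n))
            hits += pattern in s
        return hits / trials

    print(estimate(50))

A transfer-matrix / automaton recursion over the $12$ prefix states of the pattern would give exact values without inclusion-exclusion.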

Writing $\mathbb{P}(X_n - X_{n-1} = 1 \mid X_{n-1} = x)$ as $\mathbb{P}(X_n - x = 1)$?

Posted: 29 Mar 2021 08:05 PM PDT

Suppose we have a Markov chain $\{X_n; n \geq 0\}$ where each variable takes on the states $S_X = \{0, 1, 2\}$.

I am unsure about which of the following is correct:

\begin{align} \mathbb{P}(X_n - X_{n-1} = 1 \mid X_{n-1} = x) &= \mathbb{P}(X_n - x = 1 \mid X_{n-1} = x) \tag{1} \\ \mathbb{P}(X_n - X_{n-1} = 1 \mid X_{n-1} = x) &= \mathbb{P}(X_n - x = 1) \tag{2} \end{align}

Both seem plausible to me. Even though I suspect $(1)$ is correct, I cannot justify why; intuitively, it seems that replacing $X_{n-1}$ by $x$ makes the condition $X_{n-1} = x$ redundant.

Which equality is correct and how can we give a convincing justification?
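An editorial aside, hedged: one standard way to see it is that on the event $\{X_{n-1} = x\}$ the random variable $X_{n-1}$ is equal to $x$, so substitution inside the conditional probability is legitimate, but the conditioning must stay: $$\mathbb{P}(X_n - X_{n-1} = 1 \mid X_{n-1} = x) = \mathbb{P}(X_n = x + 1 \mid X_{n-1} = x),$$ which is $(1)$. Dropping the condition, as in $(2)$, would additionally require $X_n$ to be independent of $X_{n-1}$, which a general Markov chain does not satisfy.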

Why does $S = ([0,1] \times [0,1]) \cap \mathbb{Q}^2$ have no area?

Posted: 29 Mar 2021 08:10 PM PDT

I am currently in an Advanced Calculus 2 class and am using the C. H. Edwards "Advanced Calculus of Several Variables" text. In chapter 4, when discussing area in $\mathbb{R}^2$, the text says that the set $S = ([0,1] \times [0,1]) \cap \mathbb{Q}^2$ has no area. The reasoning the book gives is that given any set of non-overlapping rectangles $\{R_i'\}$ for $i=1,\dots,n$ where $R_i' \subseteq S$ for all $i$, we have $\sum_{i=1}^na(R_i') = 0$, and for any set of rectangles $\{R_i''\}$ for $i = 1,\dots, m$ where $S \subseteq \bigcup_{i = 1}^mR_i''$, we have $\sum_{i = 1}^ma(R_i'') \geq 1$. I understand that if these statements are true, there can be no area for the set $S$, but I am having trouble understanding why for any $i$, $a(R_i') = 0$, and I also don't understand how $\sum_{i = 1}^ma(R_i'') \geq 1$. Any help would be appreciated!

Clarification: This is the definition of the area of a bounded set $S \subseteq \mathbb{R}^2$ given in our book:

Given a bounded set $S \subseteq \mathbb{R}^2$, we say that its area is $\alpha$ if and only if, given $\epsilon > 0$, there exist both

  1. A finite collection $R_1',\dots,R_k'$ of nonoverlapping rectangles, each contained in $S$, with $\sum_{i=1}^ka(R_i') > \alpha - \epsilon$
  2. A finite collection $R_1'',\dots,R_l''$ of rectangles whose union contains $S$, with $\sum_{i=1}^la(R_i'') < \alpha + \epsilon$

If there exists no such number $\alpha$, we say that the set $S$ does not have area, or that its area is not defined.

Vector Space Notation question

Posted: 29 Mar 2021 07:55 PM PDT

I have a question on the notation used for vectors and vector spaces. I'm using the Wiki page for Euclidean Space and the Wiki page for vectors. Definitions are provided and questions are identified in bold.

A free vector from the origin in its simplest form: $$ \mathbf v = \vec v = \vec {0\mathbf v} = a_0\hat {\mathbf b}_0 + \cdots + a_n\hat {\mathbf b}_n $$ where the ordered basis is $B = \{\mathbf b_0, \ldots,\mathbf b_n\}$, each $\mathbf b_i = (p_0,\ldots,p_n) \in E$, and the Euclidean space is $E = \mathbb{R}^n$.

A bound vector or pair of points, with initial point $P$ and terminal point $Q$ is: $$ \vec{PQ} = (P, Q) $$ where $P,Q \in E$.

A free vector is also defined on Wiki as an equivalence class of bound vectors, where $\bumpeq$ is equipollence, defined to hold when a parallelogram can be drawn on $a,b,P,Q$: $$ \mathbf w = [\vec {PQ}]= \{(a, b) : a \in E \land b \in E \land \vec{ab} \bumpeq \vec{PQ}\} $$ or, I believe, this equivalent statement: $$ \mathbf w = [\vec {PQ}] = \{(a, b) : a \in E \land b \in E \land ( b - a = Q - P)\} $$

It's my understanding that, when using the equivalence relation above, the vector addition operation is defined on these equivalence classes: $$ +_v : \vec E \times \vec E \rightarrow \vec E $$ $$ +_v(\mathbf w_0, \mathbf w_1) = \{(a,c) : (a,b) \in \mathbf w_0 \land (b,c) \in \mathbf w_1\} $$

However, the definitions below do not appear to define $\vec{PQ}$ as a bound vector but as the following free vector: $$ \vec{PQ} = Q - P $$ Question: do the above definitions look correct?

Vector space of free vectors: $$ \vec E = \{\vec{PQ} : P \in E \land Q \in E\} $$

If $P$ is a point of $E$, then: $$ E = \{P + \mathbf v : \mathbf v \in \vec{E}\} $$ $$ P + \vec{E} = \{P + \mathbf v : \mathbf v \in \vec{E}\} $$ A line: $$ L = \{P + \lambda \vec{PQ} : \lambda \in \mathbb{R}\} $$ A line segment: $$ PQ = QP = \{P + \lambda \vec{PQ} : 0 \le \lambda \le 1\} $$

Finally, distance: $$ d(P, Q) = ||\vec{PQ}|| $$

which makes me suspicious that $\vec{PQ} = Q - P$.

March Madness: What Round?

Posted: 29 Mar 2021 08:01 PM PDT

A puzzle.

Tools: A blank piece of paper and a pencil.

A game is being played in a regular round of this tournament. (So not a play-in game or the Final Four.) You only know the seeding numbers of the two teams playing. What is the "easiest" way to determine which round it is?

Extension: What if a bracket started with more than 16 teams? E.g. Seed 223 is facing Seed 91 in a bracket with 512 teams. What round is it? (I may have used the term "bracket" incorrectly. I assume that a set of teams, seeded 1 to $2^n$, is paired in an elimination tournament in the usual fashion.)
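As an editorial aside, a hedged brute-force sketch for the extension, assuming the standard recursive seeding (the 1 vs 16, 8 vs 9, ... pattern, generalized). Two bracket slots meet in the first round where their slot indices agree after repeated halving, and this does not depend on upsets:

    def bracket_order(n):
        # seed order of the 2**n slots, built by the usual recursive pairing
        order = [1]
        for _ in range(n):
            m = 2 * len(order) + 1
            order = [s for seed in order for s in (seed, m - seed)]
        return order

    def meeting_round(a, b, n):
        pos = {seed: i for i, seed in enumerate(bracket_order(n))}
        x, y, r = pos[a], pos[b], 0
        while x != y:
            x, y, r = x // 2, y // 2, r + 1
        return r

    print(meeting_round(223, 91, 9))   # 512-team bracket

For 16 teams this reproduces the familiar pattern: first-round opponents' seeds sum to 17, and (if favorites always win) round-$r$ opponents' seeds sum to $2^{5-r}+1$.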

How do you solve for both $x$ and $y$ for $xy^2 -x^3y = 6$?

Posted: 29 Mar 2021 07:46 PM PDT

How do you solve for both $x$ and $y$ for the following equation?

$$xy^2 -x^3y = 6$$

I tried moving the terms around but my brain is numb right now. Also, is there a go-to method to approach a problem like this?

I thank you in advance!
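An editorial aside, hedged: a single equation in two unknowns generally has a one-parameter family of solutions, so "solving" here usually means expressing $y$ in terms of $x$ (or vice versa). For fixed $x \neq 0$ the equation is a quadratic in $y$, which sympy solves directly:

    import sympy as sp

    x, y = sp.symbols('x y')
    print(sp.solve(sp.Eq(x*y**2 - x**3*y, 6), y))
    # two branches: y = (x**3 +- sqrt(x**6 + 24*x)) / (2*x)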

Prove that triangle $BMN$ is equilateral and find the green area

Posted: 29 Mar 2021 08:07 PM PDT

$ABCD$ and $MNPD$ are squares. (a) Prove that triangle $BMN$ is equilateral. (b) Find the green area.

[Figure: squares $ABCD$ and $MNPD$, triangle $BMN$, and the green region to be found.]

Can anyone give me a suggestion? If it is possible to solve this by barycentric coordinates, how would I do it?

Symmetry in the complex semisimple Lie algebra - help to understand definition

Posted: 29 Mar 2021 07:52 PM PDT

I got stuck on the definition of "symmetry" in the chapter on Lie algebras, needed later to understand root systems. In the script they used the following definition:
Let $\alpha \in V\setminus \{0\}$. A symmetry with vector $\alpha$ is an element $s \in GL(V)$ with $$s(v)=v-\alpha^*(v)\alpha$$ for all $v \in V$, where $\alpha^*(\alpha)=2$. Now the book of Serre, "Complex Semisimple Lie Algebras", gives another definition:
Let $\alpha \in V\setminus \{0\}$. One defines a symmetry with vector $\alpha$ to be any automorphism $s$ of $V$ satisfying the following two conditions:

(i) $s(\alpha) = - \alpha$

(ii) The set $H$ of elements of $V$ fixed by $s$ is a hyperplane of $V$.

Now I don't see the relation between these two definitions. In particular, the second point in the second definition confuses me a lot. Also, what should I understand by the expression $\alpha^*(\alpha)$?

Many thanks for some help.
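An editorial aside, a hedged sketch of how the two definitions line up: if $s(v)=v-\alpha^*(v)\alpha$ with $\alpha^*(\alpha)=2$, then $$s(\alpha)=\alpha-\alpha^*(\alpha)\,\alpha=\alpha-2\alpha=-\alpha,$$ giving (i), and $s(v)=v$ holds exactly when $\alpha^*(v)=0$, so the fixed-point set is the hyperplane $H=\ker\alpha^*$, giving (ii). Here $\alpha^*$ is a linear functional on $V$, and $\alpha^*(\alpha)$ is simply its value at $\alpha$; the normalization $\alpha^*(\alpha)=2$ makes $s$ an involution.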

Partial derivative of conditional composite function with multivariate normal

Posted: 29 Mar 2021 07:49 PM PDT

I would like to find the partial derivative of $f(y)$ with respect to $c$, where $y$ follows the multivariate normal/Gaussian density $N(x(t),\sigma^2I_n)$, i.e.

$$f(y)=(2\pi)^{-n/2}\,|\sigma^2I_n|^{-1/2}\exp\!\left[-\tfrac{1}{2}\,(y-x(t))^T(\sigma^2I_n)^{-1}(y-x(t))\right]$$

Here, $x(t)=\dfrac{\exp(z(t))}{\int_a^p \exp(z(s))\,\text{d}s}$ and $z(t)=b(t)^Tc + b(t)^TAu$. I could do it if $x(t)$ were a direct function of $c$, but I am kind of lost at the very beginning, since $x(t)$ is a composite function of $c$, i.e. $x(t)=g(h(c))$.

N.B. Here are the dimensions: $A$ is $p\times p$, $b(t)$ is $p\times n$, $c$ and $u$ are $p\times 1$, so $z(t)$ is $n\times 1$, and $\sigma^2I_n$ is the $n\times n$ variance-covariance matrix.
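An editorial aside, a hedged sketch of the chain-rule bookkeeping only (the Jacobian $\partial x/\partial z$ must also account for $z$ entering through the normalizing integral in the denominator, which contributes its own term): $$\frac{\partial f}{\partial c}=\left(\frac{\partial z}{\partial c}\right)^{T}\left(\frac{\partial x}{\partial z}\right)^{T}\frac{\partial f}{\partial x},\qquad \frac{\partial z}{\partial c}=b(t)^{T},\qquad \frac{\partial f}{\partial x}=\frac{1}{\sigma^{2}}\,f(y)\,(y-x(t)),$$ where $\partial z/\partial c$ denotes the $n\times p$ Jacobian of $z$ with respect to $c$ (the term $b(t)^TAu$ does not involve $c$ and drops out).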

Transform a direction from world space to local space?

Posted: 29 Mar 2021 08:13 PM PDT

I've been reading this pdf about vector transformations and I don't quite understand how to implement it in a computer program. On page 10, it shows how to transform a vector from one coordinate system to another; exactly how would this look on paper? Say I had two 3D vectors that represented directions and I wanted to get the relative direction of vector $A$ to vector $B$. So if $A = (0,0,0)$ and $B = (0,1,0)$, $C$ would also be $(0,1,0)$. If $A = (0,1,0)$, $C$ would then be $(0,0,-1)$. Would I make matrices of $A$ and $B$ and just multiply them together? Other places have talked about getting the dot product? What would the $i$ and $i'$ vectors be in this case? Thank you.
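As an editorial aside, a hedged sketch of the usual basis-change computation (the frame axes and sample direction below are hypothetical, not taken from the question). The dot products mentioned above are exactly the components: stacking the local axes as the rows of a matrix $R$ makes $R\,d$ the local coordinates of a world direction $d$:

    import numpy as np

    # hypothetical local frame, given as orthonormal axes in world coordinates
    right   = np.array([1.0, 0.0, 0.0])
    up      = np.array([0.0, 0.0, 1.0])
    forward = np.array([0.0, 1.0, 0.0])

    # rows of R are the local axes; for an orthonormal frame, R is the
    # transpose (= inverse) of the local-to-world rotation matrix
    R = np.stack([right, up, forward])

    d_world = np.array([0.0, 1.0, 0.0])
    d_local = R @ d_world     # each component is a dot product with one axis
    print(d_local)            # [0. 0. 1.]  -> d_world is this frame's "forward"

In a shader you would do the same three dot products per ray direction.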

I'm pretty sure this is all the info I need but in case there is an obviously better way of doing it I will explain my project. It's a simple voxel 3D raycaster. I'm actually using Unity and rendering to a texture using a compute shader.

The way I am calculating the rays is I am centering the pixel coordinates as if they were 3D space coordinates, moving them a little forward and getting the directions from 0,0,0 to each of those points (and shrinking the dimensions for field-of-view). Then, I transform those directions relative to the player's transform every frame to get the ray directions for the GPU. Obviously, this also needs to be calculated on the GPU so I can't use the handy Unity method I used just to test if it would work (Transform.TransformDirection). So I guess I could also use a relative point transform function too and just send those points and the player's transform to the GPU.

Is a real matrix that is both normal and diagonalizable symmetric? If so, is there a proof of this not using the spectral theorem?

Posted: 29 Mar 2021 08:15 PM PDT

Given a square real matrix $A$ which we know is diagonalizable ($D = P^{-1}AP$ for a diagonal matrix $D$) and normal ($AA^T = A^TA$), is it true that $A$ is symmetric? By the spectral theorem over $\mathbb{C}$, $A$ is unitarily diagonalizable (say via some unitary matrix $Q$). It then seems to me that from this we should get that the eigenspaces are orthogonal, and hence there must also exist a real orthonormal basis of eigenvectors. Is this last step correct? From this it would then follow by the real version of the spectral theorem that $A$ is symmetric.

If it is, is there a way to prove this statement directly, without appealing to the spectral theorem? This would give a nice justification/explanation for the difference in the two spectral theorems over $\mathbb{C}$ and $\mathbb{R}$ relating it directly to whether the matrix is diagonalizable in the first place.

When is $f\ne 0$ in the interior of $\text{supp}\ f$?

Posted: 29 Mar 2021 08:03 PM PDT

Question: Is $f\ne 0$ in the interior of $\text{supp}\ f$? $\color{blue}{\text{(Resolved).}}$


Question 2 (more interesting): If we can find a counterexample, then exactly what property characterizes those functions $f$ such that $f\ne 0$ in the interior of $\text{supp}\ f$? This would most likely also depend on $X$, e.g. whether it is locally compact and Hausdorff, etc. In my opinion, this is a very deep question, and my gut says that we should be able to find some interesting results which help us understand the behavior of functions from their support sets. $\color{red}{\textbf{(Unresolved).}}$


$\text{supp}\ f$ is the support of $f:X\to \Bbb C \text{ or }\Bbb R$, a real/complex valued function on a topological space $X$. $$\text{supp}\ f = \overline{\{x: f(x)\ne 0 \}} = \text{cl}\{x: f(x)\ne 0 \}$$ It is clearly possible that $z\in \text{supp}\ f$ but $f(z) = 0$ still. For example, let $f:\Bbb R\to\Bbb R$ be given by $$f(x) = \begin{cases} x & 0\le x < \frac{1}{2} \\ 1 - x & \frac{1}{2} \le x \le 1 \\ 0 & \text{otherwise} \end{cases}$$ Here, $\text{supp}\ f = [0,1]$, but $f(0) = 0$ and $f(1) = 0$. However, what struck me was that $(0,1)$ is the interior of $[0,1]$, the support of $f$. So is it always the case that if we take a point in the interior of the support of a real/complex-valued function $f$ on a topological space, then $f$ is non-zero at that point?

Thanks a lot!

Complexity of true $\Pi_1$-sentences

Posted: 29 Mar 2021 07:48 PM PDT

The set of Gödel numbers of $\Pi_1$-formulas $\varphi(x)$ such that $\varphi(0)$ is true is not recursive. Is it recursively enumerable?

Progress on a conjecture of Burnside...

Posted: 29 Mar 2021 08:06 PM PDT

Given a group $G$, the set of automorphisms of $G$ also forms a group, $\mathrm{Aut}(G)$, with composition as the operation.

An inner automorphism is one determined by conjugation by some element $g\in G$. That is, we have the automorphism $i_g$ given by $i_g(h)=ghg^{-1}$ for all $h$.

I learned from a comment by @LeeMosher on this site that Burnside once conjectured that any class-preserving automorphism is inner. Does anyone know about any progress on this?

Of course the converse is trivial.

(Btw, the reference here is to conjugacy classes. Of course the class equation gives the sizes of these classes; for example, for a finite abelian group the class equation consists of all ones.)

For reference, this conjecture would have been made in the early $20$th century.

Tricks in research vs. contest math

Posted: 29 Mar 2021 07:49 PM PDT

I know that research math problems are unsolved and unseen, while contest math problems are designed to be solvable. All bolds are mine.

  1. How do 'tricks' in research math differ from those in contest math?

  2. Do the meanings of 'tricks' differ between the three citations? The first two argue against 'tricks' (in contest math), whereas Prof. Gowers refers to 'tricks' (in research math) approvingly.

In The Map of My Life [...] Goro Shimura wrote of his experience teaching at a cram school

I discovered that many of the exam problems were artificial and required some clever tricks. I avoided such types, and chose more straightforward problems, which one could solve with standard techniques and basic knowledge. There is a competition called the Mathematical Olympic, in which a competitor is asked to solve some problems, which are difficult and of the type I avoided. Though such a competition may have its raison d'être, I think those younger people who are seriously interested in mathematics will lose nothing by ignoring it.

[...]

In his Mathematical Education essay, Fields Medalist William Thurston said

Related to precociousness is the popular tendency to think of mathematics as a race or as an athletic competition. There are widespread high school math leagues: teams from regional high schools meet periodically and are given several problems, with an hour or so to solve them.

There are also state, national and international competitions. These competitions are fun, interesting, and educationally effective for the people who are successful in them. But they also have a downside. The competitions reinforce the notion that either you 'have good math genes', or you do not. They put an emphasis on being quick, at the expense of being deep and thoughtful. They emphasize questions which are puzzles with some hidden trick, rather than more realistic problems where a systematic and persistent approach is important. This discourages many people who are not as quick or as practiced, but might be good at working through problems when they have the time to think through them. Some of the best performers on the contests do become good mathematicians, but there are also many top mathematicians who were not so good on contest math.

[...]

In his book Mathematics: A Very Short Introduction, Fields Medalist Timothy Gowers writes

This last quality is, ultimately, more important than freakish mental speed: the most profound contributions to mathematics are often made by tortoises rather than hares. As mathematicians develop, they learn various tricks of the trade, partly from the work of other mathematicians and partly as a result of many hours spent thinking about mathematics. What determines whether they can use their expertise to solve notorious problems is, in large measure, a matter of careful planning: attempting problems that are likely to be fruitful, knowing when to give up a line of thought (a difficult judgement to make), being able to sketch broad outlines of arguments before, just occasionally, managing to fill in the details. This demands a level of maturity which is by no means incompatible with genius but which does not always accompany it.

Every nonsingular $m\times m$ matrix is row equivalent to the identity matrix $I_m$

Posted: 29 Mar 2021 08:06 PM PDT

Could anyone give an easy-to-understand proof of this fact, or at least tell me whether the proof below is correct?

We know that every matrix is row equivalent to a reduced row echelon form. An $m\times m$ square matrix $A$ of rank $m$ is row equivalent to an identity matrix from the definition of reduced row echelon form:

1) all non-zero rows are above rows of all zeros

2) the leading coefficient is strictly to the right of the leading coefficient of the row above

3) the leading coefficient is $1$ and is the only non-zero number in its column

Plus, there's a theorem saying that the rank of any matrix equals the number of non-zero rows in its reduced row echelon form.

Now we know that $A$ has $m$ non-zero rows, so every row contains a leading coefficient $1$ that is shifted to the right with respect to the leading coefficient in the row above. The only way we can arrange $m$ ones in this manner is by placing them on the diagonal (we can prove it by noting that the required number of 'shifts to the right' of the leading coefficients, going one row down at a time, is $m$; is that actually clear enough for a proof?). And we know there is nothing except the $1$ in any of those columns, so we have a diagonal matrix.

Can you make it shorter without referring to some more complicated terminology?
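As an editorial aside, a hedged sympy illustration of the statement (the example matrix is mine):

    import sympy as sp

    A = sp.Matrix([[2, 1, 1],
                   [1, 3, 2],
                   [1, 0, 0]])

    print(A.det())       # -1, nonzero, so A is nonsingular (rank 3)
    print(A.rref()[0])   # the 3x3 identity matrix, as the argument predicts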
