Saturday, April 3, 2021

Recent Questions - Mathematics Stack Exchange


Generalized Hertzsprung Problem

Posted: 03 Apr 2021 08:09 PM PDT

The Hertzsprung Problem goes as follows: in how many ways can we place exactly $n$ non-attacking kings on an $n \times n$ chessboard such that there is exactly $1$ king in each row and each column, where $n \in \mathbb{N}$?

It turns out that there exists a nice closed form for this count (though it is painful to evaluate for any $n \geqslant 5$), namely

$$n!+\sum_{k=1}^n {(-1)^k}(n-k)!\sum_{i=1}^k 2^{i} \binom{k-1}{i-1}\binom{n-k+1}{i}$$

which can be obtained using a simple inclusion-exclusion and stars-and-bars argument.
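For small $n$ the count is easy to check by brute force, since one king per row and column corresponds to a permutation and two kings attack exactly when consecutive rows receive adjacent columns. A throwaway sketch (the function name is mine, not standard):

    from itertools import permutations

    def nonattacking_kings_one_per_line(n):
        # One king per row and column <=> a permutation p of the columns;
        # kings in consecutive rows attack iff their columns differ by exactly 1.
        return sum(
            all(abs(p[i + 1] - p[i]) != 1 for i in range(n - 1))
            for p in permutations(range(n))
        )

    print([nonattacking_kings_one_per_line(n) for n in range(1, 8)])
    # [1, 0, 0, 2, 14, 90, 646]

The same brute force adapts to the two-kings-per-row-and-column version by iterating over $0$-$1$ matrices with all row and column sums equal to $2$, which at least gives data against which a candidate closed form can be tested.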

However, what if I want to place exactly $2$ kings in each row and column of an $n \times n$ chessboard such that no two kings attack each other? In how many ways can I do this? I tried this for a long time and was able to find a tedious closed form using some restricted permutations, but I'm not satisfied with it. Does anyone know whether a nicer closed form exists? If yes, which argument did you use?

I call this the Generalized Hertzsprung Problem: in how many ways can we place non-attacking kings on an $n \times n$ chessboard such that there are exactly $m$ kings in each row and each column, where $m, n \in \mathbb{N}$?

Any advice on how to approach the generalized version? Your help would be highly appreciated. Thanks.

How do I compute the minimum or maximum LINE of a bivariate quadratic function?

Posted: 03 Apr 2021 08:06 PM PDT

Given a bivariate quadratic function: $$ y = g(x, w) = a x^2 + b x w + c w^2 + d x + e w + f $$ I think I know that if $b^2 - 4 a c < 0$, then it has a single point minimum (if $a > 0$) or maximum (if $a < 0$) at: $$ x = \frac{2 c d - b e}{b^2 - 4 a c} , w = \frac{2 a e - b d}{b^2 - 4 a c} $$ Please correct me if that is incorrect.

But if $b^2 - 4 a c = 0$ and $2 c d - b e = 0$ and/or(?) $2 a e - b d = 0$, then it has a minimum (if $a > 0$) or maximum (if $a < 0$) line rather than a single point. What I do not know how to do is to compute that line in terms of $a, b, c, d, e,$ and $f$, similar to how I can compute the single point above. Can anyone tell me how to compute that line? Thanks!
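For reference, here is a short sketch of where both cases come from, assuming $a \neq 0$: the critical set is obtained by setting the gradient of $g$ to zero,

$$ \nabla g = \begin{pmatrix} 2 a x + b w + d \\ b x + 2 c w + e \end{pmatrix} = \mathbf{0}, \qquad \det\begin{pmatrix} 2a & b \\ b & 2c \end{pmatrix} = 4ac - b^2 . $$

When $b^2 - 4ac \neq 0$, Cramer's rule gives exactly the single point quoted above. When $b^2 - 4ac = 0$, the system is consistent precisely when $2ae - bd = 0$ (and then $2cd - be = 0$ follows automatically), in which case the critical set is the line $2ax + bw + d = 0$.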

Proof that the Dirac delta function is a member of $H^{-1}(\mathbb{R})$

Posted: 03 Apr 2021 07:57 PM PDT

I see this fact commonly stated, but I haven't been able to find a formal proof of it anywhere.
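For concreteness, the statement I would like to see made rigorous, using the Fourier-transform characterization of the norm (constants depend on the Fourier convention), is

$$ \|\delta_0\|_{H^{-1}(\mathbb{R})}^2 \;\propto\; \int_{\mathbb{R}} \frac{|\widehat{\delta_0}(\xi)|^2}{1+\xi^2}\,d\xi \;\propto\; \int_{\mathbb{R}} \frac{d\xi}{1+\xi^2} = \pi < \infty, $$

using that $|\widehat{\delta_0}|$ is constant.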

Thanks.

Find point B on line L given points A and C where angle ABC equals angle X

Posted: 03 Apr 2021 07:56 PM PDT

Trying to work out this tricky trigonometry question for a game I am programming. My maths knowledge is basic so if possible please keep the answer as basic as possible.

I need a formula to calculate a point B on an infinite line L, given two points A and C, such that the angle ABC equals a specified angle X.

I also understand that there are normally two such points B on the line L.

Image to explain better: [image omitted]
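In case it helps to state what I have in mind computationally: a purely numerical approach is to parametrize points on L and solve angle(ABC) = X for the parameter. A rough sketch (all names here are my own placeholders, not from any particular game library):

    import numpy as np
    from scipy.optimize import brentq

    def angle_at(B, A, C):
        # angle ABC at the vertex B, in radians
        u, v = A - B, C - B
        cos_ang = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
        return np.arccos(np.clip(cos_ang, -1.0, 1.0))

    def points_on_line_with_angle(P, d, A, C, X, t_range=(-100.0, 100.0), steps=2000):
        # Points B = P + t*d on the line through P with direction d such that
        # angle(ABC) = X (radians): scan t for sign changes of the difference,
        # then refine each bracket with a root finder.
        d = d / np.linalg.norm(d)
        f = lambda t: angle_at(P + t * d, A, C) - X
        ts = np.linspace(t_range[0], t_range[1], steps)
        vals = [f(t) for t in ts]
        roots = []
        for t0, t1, v0, v1 in zip(ts[:-1], ts[1:], vals[:-1], vals[1:]):
            if v0 * v1 < 0:
                roots.append(brentq(f, t0, t1))
        return [P + t * d for t in roots]

    # Example: line y = 0, A = (0, 2), C = (3, 2), X = 45 degrees -> two points B.
    A, C = np.array([0.0, 2.0]), np.array([3.0, 2.0])
    print(points_on_line_with_angle(np.array([0.0, 0.0]), np.array([1.0, 0.0]),
                                    A, C, np.radians(45.0)))

A closed-form alternative is the inscribed angle theorem: the locus of points seeing the segment AC under the angle X is a circular arc, so B can also be found by intersecting the line L with that circle.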

Thank you.

Is $S^1\times S^2$ isomorphic to $\mathbb{R}P^3$?

Posted: 03 Apr 2021 07:54 PM PDT

My question here is simple. Is it the case that $S^1\times S^2\cong \mathbb{R}P^3$?

It seems like this should be true for a couple of reasons. First, $\mathbb{R}P^3\cong SO(3)$ and every rotation is uniquely determined by an axis and an angle about that axis, i.e. an element of $S^2$ and an element of $S^1$. It is also the case that $SO(3)/SO(2)\cong SO(3)/S^1\cong S^2$. However, I can't seem to find a way to show that the two spaces are homeomorphic.

If they're not homeomorphic, then how exactly are $S^1\times S^2$ and $\mathbb{R}P^3$ related?

Misunderstanding proof of bound on chromatic number of integer distance graphs

Posted: 03 Apr 2021 08:03 PM PDT

Given a subset $D$ of positive integers, let the distance graph $G(\mathbb{Z},D)$ be the graph with vertex set $\mathbb{Z}$ in which an edge connects two integers whose absolute difference is in $D$.

This paper contains the following theorem: if $D$ is finite, then $\chi(G(\mathbb{Z},D))\le |D|+1$.

Here is the proof the paper provides:

We color the vertices in $G(Z,D)$ recursively as follows. First, $f(0)=1$. When $f(j)$ are defined for $-i\le j\le i$, let $f(i+1)$ be the minimum positive integer not in
$A=\{f(j):-i\le j\le i$ and $i+1-j\in D\}$ and $f(-i-1)$ the minimum positive integer not in $B=\{f(j):-i\le j\le i $ and $j+i+1\in D\}$. $f$ is clearly a proper coloring for $G(Z,D)$. Note that $|A|\le |D|$ and $|B|\le|D|$ imply that $f(i+1) \le |D|+1$ and $f(-i-1) \le |D|+1$. So $f$ is a proper $(|D|+1)$-coloring. $\blacksquare$

I am able to follow the proof until the second-to-last sentence. I agree with the implication given, but I don't see why the condition $|A|\le |D|$ and $|B|\le |D|$ should be true. Indeed, if $D=\{1,\ldots, n\}$ and $i>n$, then isn't $|A|=|B|=|D|+1>|D|$?

This paper/theorem is cited by other papers on the same topic, so I assume the proof must be correct. What am I misunderstanding?
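Incidentally, implementing the recursion literally makes it easy to inspect $|A|$ and $|B|$ for concrete choices of $D$ (a throwaway sketch; the names are mine):

    def greedy_color_set_sizes(D, radius):
        # Follow the recursion from the quoted proof and record |A|, |B| at each step.
        f = {0: 1}
        sizes = []
        for i in range(radius):
            A = {f[j] for j in range(-i, i + 1) if (i + 1) - j in D}
            f[i + 1] = min(c for c in range(1, len(D) + 2) if c not in A)
            B = {f[j] for j in range(-i, i + 1) if j + i + 1 in D}
            f[-i - 1] = min(c for c in range(1, len(D) + 2) if c not in B)
            sizes.append((len(A), len(B)))
        return sizes

    D = {1, 2, 3, 4}
    sizes = greedy_color_set_sizes(D, 30)
    print(max(a for a, _ in sizes), max(b for _, b in sizes))  # both at most |D| = 4

(The recorded sizes stay at most $|D|$ because, for each fixed $i$, there are at most $|D|$ indices $j$ with $i+1-j \in D$.)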

Action of centralizer on fixed-point submanifold

Posted: 03 Apr 2021 07:48 PM PDT

Let $X$ be a manifold on which a discrete group $\Gamma$ acts properly such that the quotient $\Gamma\backslash X$ is compact. For $g\in\Gamma$, let $X^g$ denote the set of points in $X$ fixed by $g$. Then $X^g$ is a submanifold of $X$.

The centralizer $Z_\Gamma(g)$ acts naturally on $X^g$, since for any $x\in X^g$, $h\in Z_\Gamma(g)$, we have $$g(hx)=h(gx)=hx,$$ whence $hx\in X^g$.

I would like to better understand the action of $Z_\Gamma(g)$ on $X^g$.

Question 1: Does $Z_\Gamma(g)$ always act effectively on $X^g$?

Question 2: Are there natural conditions one can impose so that $Z_\Gamma(g)$ acts freely on $X^g$ (besides requiring the action of $\Gamma$ on $X$ to be free)?

Does any polygon with side number 2n >= 2 form a torus when all pairs of opposite sides are joined? (works for n=2, 3)

Posted: 03 Apr 2021 07:47 PM PDT

Wikipedia's article Eisenstein integer, in the section "Quotient of C by the Eisenstein integers", says:

The quotient of the complex plane C by the lattice containing all Eisenstein integers is a complex torus of real dimension 2. This is one of two tori with maximal symmetry among all such complex tori.[citation needed] This torus can be obtained by identifying each of the three pairs of opposite edges of a regular hexagon. (The other maximally symmetric torus is the quotient of the complex plane by the additive lattice of Gaussian integers, and can be obtained by identifying each of the two pairs of opposite sides of a square fundamental domain, such as [0,1] × [0,1].)

This answer explains and shows how identifying opposite sides of a hexagon produces a torus, as do answers to What surface do we get by joining the opposite edges of a hexagon?

Question: Identifying opposite sides of a quadrilateral or of a hexagon produces a torus in both cases. Does any polygon with side number $2n \geq 2$ form a torus when all pairs of opposite sides are joined?


From the linked answer: [image omitted; source: http://www.math.cornell.edu/~mec/Winter2009/Victor/part1.htm]

The pre-image of $a$ with $|a|>1$ contains exactly one point.

Posted: 03 Apr 2021 07:43 PM PDT

Let $G=\{z\in\mathbb{C}: |z-2|<1\}$ and assume $f$ is a holomorphic function on the closed disk $\overline{G}$ except for a simple pole at $z_0\in G$. If $|f(z)|=1$ for every $z\in \overline{G}$, then show that for every complex number $a$ with $|a|>1$, $f^{-1}(a)$ contains exactly one point.

My idea was to define a curve inside of which the function $z\mapsto f(z)-a$ has no zeros and then use the argument principle to conclude that $\frac{1}{2\pi i} \int_{\gamma} \frac{f'(z)}{f(z)-a}\,dz=1$; however, I was not able to define the curve, and I was having trouble using some of the assumptions of the problem. Any help would be very much appreciated.

Integrating $\sec^2{x}$ by parts

Posted: 03 Apr 2021 08:03 PM PDT

NB: yes, I know it is a common integral which equals $\tan{x}$, but if I brute force integration by parts, it must work, right? It is not working for me.

Firstly I have

$$\int \:\frac{1}{\cos^2x}dx=\int \:\:\frac{-\sin x}{-\sin x\cos^2x}\,dx$$

And after integrating by parts, differentiating $\frac{1}{-\sin x}$ and integrating $\frac{-\sin x}{\cos^{2}x}$, I erroneously obtain

$$\frac{1}{\sin x\cos x}+\int \frac{dx}{\sin^2x\cos x}$$

This did not seem to be close to $\tan{x}$, and sure enough the result came out to be $\frac{1}{\sin \left(x\right)\cos \left(x\right)}-\csc \left(x\right)+\ln \left|\tan \left(x\right)+\sec \left(x\right)\right|+C$

What went wrong?

Edit: I should've obtained $\frac{1}{\sin x\cos x}+\int \frac{dx}{\sin^2x}$
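For completeness, the corrected expression does recover $\tan x$:

$$ \frac{1}{\sin x\cos x}+\int \frac{dx}{\sin^2x} = \frac{1}{\sin x\cos x}-\cot x + C = \frac{1-\cos^2 x}{\sin x\cos x}+C = \frac{\sin x}{\cos x}+C = \tan x + C. $$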

Most even-degree polynomials in $\mathbb{R}[x]$ are reducible

Posted: 03 Apr 2021 07:36 PM PDT

Say $f(x) \in \mathbb{R}[x]$ has degree $n \geq 3$. If $n$ is odd, then $f(x) \to + \infty$ as $x \to +\infty$ and $f(x) \to - \infty $ as $x \to -\infty$ (or the other way around), so $\exists c \in \mathbb{R}$ such that $f(c) = 0$ by the intermediate value theorem, so $f(x) = (x - c)g(x)$ for a nonconstant polynomial $g(x) \in \mathbb{R}[x]$, so $f(x)$ is reducible.

The same is true if $n$ is even: $f(x)$ is still reducible. Can we prove this without using the fact that $\mathbb{C}$ is algebraically closed? Is there any intuitive reason that an even-degree polynomial in $\mathbb{R}[x]$ is reducible if it is not quadratic? The preceding paragraph gives a simple explanation for why odd-degree polynomials behave this way.
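For instance, $x^4+1$ has no real roots, yet it is reducible over $\mathbb{R}$:

$$ x^4+1=\left(x^2+\sqrt{2}\,x+1\right)\left(x^2-\sqrt{2}\,x+1\right), $$

so in even degree the reducibility cannot come from a real root the way it does in odd degree.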


Here is a proof using algebraic closure. First, we must prove that $\mathbb{C} \cong \mathbb{R}[x]/(x^2 + 1)$ is indeed algebraically closed. Let $f \in \mathbb{C}[x] - \mathbb{C}$, and take $M$ to be the splitting field of $f$ over $\mathbb{C}$, then take $N$ to be a normal extension of $\mathbb{R}$ containing $M$, so that $\mathbb{R} \subset \mathbb{C} \subseteq M \subseteq N$; it suffices to prove that $N = \mathbb{C}$. Let $H$ be a Sylow 2-subgroup of $\mathrm{Gal}(N/\mathbb{R})$, so that $[\mathrm{Gal}(N/\mathbb{R}) : H] = [N^H : \mathbb{R}]$ is odd -- in fact, $[N^H : \mathbb{R}] = 1$ by the first paragraph of this post, so $H = \mathrm{Gal}(N/\mathbb{R})$, so $[N : \mathbb{C}]$ is a power of $2$ since it divides $|H|$. If $N \neq \mathbb{C}$, then there is a degree-$2$ extension of $\mathbb{C}$, but no such extension can exist because every element of $\mathbb{C}$ has a square root. Therefore $N = \mathbb{C}$ as desired.

Now suppose there is an irreducible, monic polynomial $f(x) \in \mathbb{R}[x]$. As $\mathbb{C}$ is algebraically closed, $\mathbb{C}$ contains a root $\alpha$ of $f$, so $[\mathbb{R}(\alpha) : \mathbb{R}] \leq [\mathbb{C} : \mathbb{R}] = 2$. Also $\mathbb{R}(\alpha) \cong \mathbb{R}[x]/(f)$ via a field isomorphism that is the identity on $\mathbb{R}$, so $$ \deg f = [\mathbb{R}[x]/(f) : \mathbb{R}] = [\mathbb{R}(\alpha) : \mathbb{R}] \leq 2 $$

What is $g_s(x)$?

Posted: 03 Apr 2021 07:36 PM PDT

I have some confusion about Rudin's book, page $39$.

I think there is a typo on page $39$.

My confusion is with the passage shown below, marked in a red circle:

[image of the relevant passage omitted]

I think $$g_{s}(x)=\begin{cases}1, & x\in \bar{V_s} \\ S, & x\notin \bar{V_s}. \end{cases} $$ is not correct

My thinking: it should be like this: $$g_{s}(x)=\begin{cases}s, & x\in \bar{V_s} \\ 0, & x\notin \bar{V_s}. \end{cases} $$

Am I right or wrong?

Proof of the second symmetric derivative using L'Hôpital

Posted: 03 Apr 2021 07:51 PM PDT

So, I need to prove the symmetric second derivative formula

$f''(x_0)=\lim_{h\to0}\frac{f(x_0+h)+f(x_0-h)-2f(x_0)}{h^{2}}.$

And I want to prove it using L'Hôpital:

$\lim_{h\to0}\frac{f(x_0+h)+f(x_0-h)-2f(x_0)}{h^{2}} = \lim_{h\to0}\frac{f'(x_0+h)-f'(x_0-h)}{2h} = \lim_{h\to0}\frac{f''(x_0+h)+f''(x_0-h)}{2} = f''(x_0).$

I do know how to use L'Hôpital's rule, but I have a few questions, like: where did the $-2f(x_0)$ term go? Why does the middle sign change from $+$ to $-$ and then back to $+$? I guess all I need is to see a few more steps before getting to the final result.
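Spelled out, the first application of L'Hôpital differentiates numerator and denominator with respect to $h$, so $x_0$ is treated as a constant:

$$ \frac{d}{dh}\Big(f(x_0+h)+f(x_0-h)-2f(x_0)\Big) = f'(x_0+h)\cdot 1 + f'(x_0-h)\cdot(-1) - 0, \qquad \frac{d}{dh}\,h^{2} = 2h. $$

The constant term $-2f(x_0)$ has derivative $0$, and the minus sign comes from the chain rule applied to $x_0-h$; differentiating once more, the chain rule produces another factor of $-1$ in that term, which flips the sign back to $+$.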

Any help will be appreciated!

$\{0,1\}^{\omega}$ is not compact

Posted: 03 Apr 2021 07:28 PM PDT

$\{0,1\}^{\omega}$ is not compact. I proved the more difficult result that $[0,1]^{\omega}$ is not compact in the box topology and was told my proof was similar to this one; however, I think it would be good practice to try this result as well. Here $\{0,1\}$ carries the discrete topology, and I use the box topology on the product. Let $C$ be the collection of all points in $\{0,1\}^{\omega}$. Consider the open cover of $\{0,1\}^{\omega}$ given by $\{\prod_{n \in \mathbb{N}}U_{nx}\}_{x \in C}$, where for each $x=(x_n)_{n \in \mathbb{N}}$ in $C$, a corresponding open set $\prod_{n \in \mathbb{N}}U_{nx}$ containing it is defined by

$$U_{nx}=\begin{cases} \{1\} &\text{ if } x_n=1 \\ \{0\} & \ \text{if} \ x_n=0\end{cases}$$

Then $x \in \prod_{n \in \mathbb{N}}U_{nx}$, and if $y \in C-\{x\}$, there is an index $k$ such that $x_k \notin U_{ky}$, so $x \notin \prod_{n \in \mathbb{N}}U_{ny}$. So if $\prod_{n \in \mathbb{N}}U_{nx}$ is taken out of the cover, the element $x \in \{0,1\}^{\omega}$ is not covered by $\{\prod_{n \in \mathbb{N}}U_{nx}\}_{x \in C}-\{\prod_{n \in \mathbb{N}}U_{nx}\}$. Since $\{\prod_{n \in \mathbb{N}}U_{nx}\}_{x \in C}$ is an infinite cover, and removing even one element of this collection results in a non-cover, $\{0,1\}^{\omega}$ is not compact.

One thing I had difficulty showing is that the collection $\{\prod_{n \in \mathbb{N}}U_{nx}\}_{x \in C}$ actually covers $\{0,1\}^{\omega}$. How could I show explicitly that the union of the elements in the cover equals $\{0,1\}^{\omega}$? Also, would this be the correct way to do this proof? I modeled the solution after this solution, in which I was provided help: Proving $[0,1]^{\omega}$ in the box topology is not compact

Inverse limit has two names

Posted: 03 Apr 2021 07:24 PM PDT

Why do we call it the projective limit or the inverse limit? If I've understood correctly, these are two names for the same object, so I would be glad to know if there is a simple explanation for this.

Maximizing correct (or incorrect) labeling and proving its bound

Posted: 03 Apr 2021 07:29 PM PDT

Suppose there are $M$ workers labeling $N$ items (each item has a true label $+1$ or $-1$). Worker $i$ has probability $\frac{1+p_i}{2}$ $(-1\le p_i\le 1)$ of recovering the true label of an item.

Given this setup, we have an $N\times M$ random matrix $F$ where $F_{ij}$ is the label given to item $i$ by worker $j$. My interest is in finding an algorithm that returns $N$ labels such that, with 90% probability, we have $O(\frac{\ln(MN)}{M})$ correct labels or $O(\frac{\ln(MN)}{M})$ incorrect labels.

My thought is to find the column with the largest $|p_i|$, so that this worker's labels are at least consistently right or consistently wrong.

My current strategy is to take each column sum, square it, subtract $N$, take the absolute value, and use the column with the highest resulting value.

Some basic reasoning as a starting point, for column $j$ (using $l$ to denote the true labels):

$$E\Big(\big(\sum_{p=1}^N F_{pj}\big)^2\Big)=E\Big(\sum_{p,q=1}^N F_{pj}F_{qj}\Big)=E\Big(N+\sum_{p\neq q} F_{pj}F_{qj}\Big)=N+p_j^2\sum_{p\neq q} l_pl_q$$

The last step holds since $E(F_{pj}F_{qj})=l_pl_q((\frac{1+p_j}{2})^2+(\frac{1-p_j}{2})^2)-2l_pl_q((\frac{1+p_j}{2})(\frac{1-p_j}{2}))=l_pl_q p_j^2$ for $p\neq q$

So this looks good, as the expectation gives some indication of $p_j^2$: after subtracting $N$, the expectation reveals the magnitude of $p_j^2$. But I have struggled for a few weeks to prove that this algorithm achieves the desired bound.
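A quick Monte Carlo check of that expectation for a single column (a throwaway sketch; all names are mine):

    import numpy as np

    rng = np.random.default_rng(0)
    N, p_j, trials = 50, 0.3, 200_000
    labels = rng.choice([-1, 1], size=N)               # true labels l_p
    correct = rng.random((trials, N)) < (1 + p_j) / 2  # worker j is right w.p. (1 + p_j) / 2
    F_col = np.where(correct, labels, -labels)         # `trials` independent copies of column j
    empirical = np.mean(F_col.sum(axis=1) ** 2)
    theory = N + p_j**2 * (labels.sum() ** 2 - N)      # N + p_j^2 * sum_{p != q} l_p l_q
    print(empirical, theory)                           # the two numbers agree closely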

Any hints/help are super appreciated.

Two dissection problems for rectangles

Posted: 03 Apr 2021 08:10 PM PDT

Let us consider two integer (that is, with sides of integer length) rectangles $S$ and $T$ of the same area. Then, obviously, $S$ can be dissected into several integer rectangles $A_1$, ..., $A_n$ (we will call such a dissection a tiling, and the rectangles constituting it tiles) which can be rearranged to form $T$ (e.g., dissect $S$ into unit squares).

Question #1. Is there any non-constant lower bound for the size of such a tiling?

(Size of the tiling is its cardinality, the number of its tiles)

As this sounds rather vague, let me formulate a very specific question (perhaps even excessively specific).

Question #1.1. Let $p < q < r < s$ be four different prime numbers, let $S$ be the rectangle with dimensions $pq \times rs$, and let $T$ be the rectangle with dimensions $pr \times qs$. Does there exist a function $f(p,q,r,s)$ which a) serves as a lower bound for the above-mentioned tiling size, and b) tends to infinity as all four arguments tend to infinity?

Another (similar but different) interesting question of the same nature (also quite specific):

Question #2. Given a natural number $n$, let $S$ be the rectangle with dimensions $n \times (n+1)$, and $T$ the rectangle with dimensions $(n+1) \times n$. Is there a non-constant lower bound (as a function of $n$) for the size of a tiling of $S$ whose tiles can be then moved without rotating to form $T$?

Perhaps we should concentrate first on the following simplified version of Question 2.

Question #2.1. If $n = 1000000$, then is it possible to dissect an $n \times (n+1)$ rectangle into less than $20$ integer rectangles which can be moved (without rotation) to form an $(n+1)\times n$ rectangle?

Note. Don't try to dissect into square tiles. It can be proved (although the only elementary proof I know is neither very short nor very elementary) that $(n+1) \times n$ rectangle cannot be dissected into less than $\log_2(n)$ square tiles.

Ways of solving the following recurrence relation system.

Posted: 03 Apr 2021 07:28 PM PDT

Consider the following system of linear recurrence relations.

$$\begin{aligned} p_n &= a \cdot p_{n-1} - c_{n-1}\\ c_{n-1} &= p_{n-1} - b \cdot p_{n-2} + c_{n-2}\end{aligned}$$

with $p_0 = 1$ and $c_1 = 1$.

I've tried to represent $p_n$ as a finite linear combination of the $p_{k}$, $k < n$, but this hasn't worked for me. Is there any chance of representing the final solution in a closed form?
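For what it's worth, the first relation gives $c_{n-1} = a\,p_{n-1} - p_n$ (and, shifted, $c_{n-2} = a\,p_{n-2} - p_{n-1}$); substituting both into the second relation seems to eliminate $c$ entirely (assuming the relations hold for all relevant $n$):

$$ a\,p_{n-1} - p_n \;=\; p_{n-1} - b\,p_{n-2} + \big(a\,p_{n-2} - p_{n-1}\big) \qquad\Longrightarrow\qquad p_n = a\,p_{n-1} - (a-b)\,p_{n-2}, $$

a constant-coefficient second-order linear recurrence in $p_n$ alone.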

Any ideas?

Uniform convergence of $C^1$ functions up to the boundary

Posted: 03 Apr 2021 07:30 PM PDT

In Section 1.4 of the book "Partial Differential Equations in Action: From Modelling to Theory" by Sandro Salsa, I encountered the following proposition (rephrased):

Let $\Omega$ be a bounded, open, connected set in $\mathbb{R}^n$. Consider a sequence of functions $u_m \in C^1(\overline{\Omega})$ (the notation here means $u_m$ has continuous partials that can be extended continuously up to $\partial \Omega$). If $u_m(x_0)$ converges at some $x_0\in \Omega$, and $\nabla u_m$ converges uniformly to some $F \in C^1(\overline{\Omega})$ , then $u_m$ converges uniformly to some $u \in C^1(\Omega)$ , with $\nabla u = F$.

My attempt at proof: The uniform convergence of $\nabla u_m$ implies uniform boundedness of $\nabla u_m$. This would allow proving the equicontinuity of $u_m$ (which is both necessary and sufficient for establishing the result) by computing a line integral, IF the domain $\Omega$ has a nice shape, e.g. convex. For general domain $\Omega$, one could still manage to prove the equicontinuity for some $\epsilon$ distance away from the boundary, by covering with small balls, but it doesn't seem possible to extend it all the way to the boundary. For example, consider the domain in this example.

Question: how does one prove this proposition in general? (Or does it even hold in general?)

Functional equations satisfied by both sine and tangent functions.

Posted: 03 Apr 2021 08:01 PM PDT

The functional equation identity $$ f(a)f(b)f(a\!-\!b)+f(b)f(c)f(b\!-\!c)+f(c)f(a)f(c\!-\!a)+f(a\!-\!b)f(b\!-\!c)f(c\!-\!a) = 0. $$ is satisfied by $f(x)= k_1\sin(k_2\,x)$ and $f(x)=k_1\tan(k_2\,x)\,$ where $\,k_1,k_2\,$ are arbitrary constants.

As a limiting case of both sine and tangent, $\,f(x)=k_1\,x\,$ is the simplest solution.
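A quick numerical spot check of the identity for representatives of these families (a throwaway sketch; the names are arbitrary):

    import math, random

    def satisfies_identity(f, trials=1000, tol=1e-9):
        # Test the four-term identity at random triples (a, b, c).
        for _ in range(trials):
            a, b, c = (random.uniform(-0.7, 0.7) for _ in range(3))
            total = (f(a) * f(b) * f(a - b) + f(b) * f(c) * f(b - c)
                     + f(c) * f(a) * f(c - a) + f(a - b) * f(b - c) * f(c - a))
            if abs(total) > tol:
                return False
        return True

    print(satisfies_identity(math.sin),
          satisfies_identity(math.tan),
          satisfies_identity(lambda x: 2.0 * x))   # True True True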

I know of a dozen identities similar to this one. They all have common features. Each is a polynomial equated to zero, where each monomial term in the polynomial is a product of factors of the form $\,f(x)\,$ with $\,x\,$ a variable or a linear combination of variables. I am interested in those which are satisfied by both $f=\sin$ and $f=\tan$. There are infinitely many such identities, but there may be a way to characterize them and perhaps find a basis for them. It would be nice to find explicitly a few of the simplest such basis identities, where simplicity is measured by the number of variables, the number of terms, and the degree of the polynomial.

How far is this possible to do?


NOTE: In all but one of the identities of this kind that I know of, the identity is also satisfied by $\,f(x)=\text{sn}(x,m),\,$ the Jacobi elliptic function $\,\text{sn},\,$ as well as the related $\,\text{sc}, \text{sd}\,$ functions. The one exception is for the Jacobi Zeta function.

NOTE: The dozen identities I refer to are in my file Special Algebraic Identities (ident04.gp).

Multiplicative version of "summation"

Posted: 03 Apr 2021 07:30 PM PDT

Repeated sum is denoted using $\sum$ and is called "summation." What is the name for the analogous process with multiplication, denoted $\prod$?

What are some interesting and unusual theorems, identities, notations in Trigonometry?

Posted: 03 Apr 2021 07:25 PM PDT

Context/Motivation

When studying a subject I usually like to create a document and make a kind of study guide with notes about the subject. I create the document and write it using LaTeX. Besides learning about the subject itself (be it math or something related to robotics), I also practice LaTeX and English, because I write it in English. Taking trigonometry, the topic of this question: I would write as if lecturing myself and go through concepts, theorems, and formulas in as much detail as possible. Last night I was wondering about making notes on trigonometry, and it reminded me of a notation that I found unusual when I first saw it, but interesting, regarding the domain of the tangent function.

Working with the tangent function $f(x)=\tan(x)=\dfrac{\sin(x)}{\cos(x)}$ as a real function, we need $\cos(x)\neq 0$, so the domain of the tangent function is usually written as $\operatorname{Dom}(\tan) = \left\{x \in \mathbb{R}: x \neq \dfrac{\pi}{2}+k\pi, k \in \mathbb{Z} \right\}$.

We have $\cos(x) = 0$ for $x=\pm\dfrac{\pi}{2}, \pm\dfrac{3\pi}{2}, \pm\dfrac{5\pi}{2}...$

In fact, writing the domain as a union of intervals we have $$... \quad \cup\left(-\dfrac{3\pi}{2},-\dfrac{\pi}{2}\right) \cup \left(-\dfrac{\pi}{2},\dfrac{\pi}{2}\right)\cup \left(\dfrac{\pi}{2},\dfrac{3\pi}{2}\right)\cup \quad ...$$

Considering, for $k\in\mathbb{Z}$, the points $x_k = \dfrac{\pi}{2}+k\pi = \dfrac{2k\pi+\pi}{2}= \boxed{\dfrac{(2k+1)\pi}{2}}$, we can note that the intervals are determined by these points, so

$$(x_k, x_{k+1})= \left(\dfrac{(2k+1)\pi}{2}, \dfrac{(2k+3)\pi}{2}\right)$$

Since we have an infinite number of unions, we can represent them using the big union notation. This is a notation I was not familiar with, which is why I found it very interesting. Therefore, we have

$$\operatorname{Dom}(\tan) = \left\{x: x \neq \dfrac{\pi}{2}+k\pi, k \in \mathbb{Z} \right\} = \bigcup_{k=-\infty}^{\infty} \left(\dfrac{(2k+1)\pi}{2}, \dfrac{(2k+3)\pi}{2}\right)$$

Question

I'm afraid this question might be off-topic, but I am asking it here because it seems to be a topic where experience is very important. I am looking for identities, theorems, concepts, and notations that are not very common or usual in regular books and textbooks. It is just out of curiosity, but I am really interested in these new things. I'm sure the example I gave above might not be impressive to university students or most people who use MSE, but I was very surprised when I first saw it.

More precisely, I am looking for answers of the following kind:

Trigonometric identities that are unusual and that might or might not have interesting applications;

Notations that are not very common;

Concepts and theorems that are usually not seen in regular high school but present interesting ideas;

University-level trigonometry concepts with interesting ideas/applications in high-school trigonometry;

The question is certainly very broad, but any interesting information is very welcome.

How do I learn more advanced mathematics?

Posted: 03 Apr 2021 07:42 PM PDT

I am a junior in high school (16.1 years old); although I perform at the top of my class and have some competitive achievements at the national level, I think I am very far from both the IMO and college-level mathematics. I am struggling to move on with mathematics since there isn't any clear pathway I could find. For the past year I have wanted to learn calculus but kept procrastinating (this halt in my learning process has been the case for the past 2-3 years, during which I didn't learn anything new in school, but also didn't manage my time and didn't learn anything on my own). I don't think I have great planning skills, but I do think I have adequate mathematical skills, and I really want to pursue mathematics in the future. What should I learn, and how? Do you have any suggestions? Is it too late for me to become a "good" mathematician in the future? I don't want to be in constant depression, thinking about how I won't become a "good" mathematician because I am not intelligent enough and don't learn enough; honestly, I feel like this cycle has not only prevented me from learning but also consumed me psychologically. Although this is probably not the right place to ask these questions, I wanted to give it a try.

How do I solve this functional equation question?

Posted: 03 Apr 2021 08:10 PM PDT

The Question. There exist functions $f:\mathbb{R} \rightarrow \mathbb{R}$ such that the following holds: $$f \left(\frac{x-3}{1+x}\right)+f \left(\frac{x+3}{1-x} \right)=x.$$ Then the average value of $$\sum_{k=2}^{100}\frac{f(k)}{k}$$ can be expressed in the form $\frac{m}{n}$, where $m$ and $n$ are relatively prime integers and $n$ is positive. Find $100m+n$. (For example, if the solutions to that FE were $f(x)\equiv 0$ and $f(x)\equiv-x$, then the average value of that sum would be $-\frac{5049}{2}$ and the answer would be $2-504900=-504898$.)

So far I have just found that $f(0)=0$, $f(-3)=3$, and $f(3)=-3$, but even with a quadratic regression I am pretty sure that I am not on the right track. How do I solve this?
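For reference, the two inner maps are iterates of a single Möbius transformation of order three: writing $\varphi(x)=\frac{x-3}{1+x}$ and $\psi(x)=\frac{x+3}{1-x}$, a direct computation (away from $x=-1$ and $x=1$) gives

$$ \varphi(\varphi(x))=\psi(x), \qquad \varphi(\psi(x))=x, \qquad\text{so}\qquad \varphi\circ\varphi\circ\varphi=\mathrm{id}, $$

and the functional equation can be read as $f(\varphi(x))+f(\varphi(\varphi(x)))=x$.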

Symmetrization and Contraction Principle of Random Variables

Posted: 03 Apr 2021 08:05 PM PDT

I was reading a paper and came across the terms symmetrization and contraction principle of random variables. I tried to extract the statements as follows:

Symmetrization: Let $X_1,\dots,X_n$ be independent zero-mean random variables and $p\geq 2$, then $$\left\|\sum_{i=1}^n a_i X_i\right\|_p \leq 2 \left\|\sum_{i=1}^n a_i \varepsilon_i X_i\right\|_p$$ where $a_i$ are real numbers and $\varepsilon_i$ denote a sequence of symmetric independent Rademacher random variables (also independent of the $X_i$'s).

Contraction Principle: Let $X_1,\dots,X_n$ be independent non-negative random variables and $p\geq 2$. Further, suppose for each $i$, we have $\mathbb{P}(Y_i\geq t)\geq \mathbb{P}(X_i\geq t)$ for all $t>0$, where $Y_1,\dots, Y_n$ are also non-negative random variables. Then we have $$\left\|\sum_{i=1}^n a_i \varepsilon_i X_i\right\|_p \leq \left\|\sum_{i=1}^n a_i \varepsilon_i Y_i\right\|_p$$ where $a_i$ are real numbers and $\varepsilon_i$ denote a sequence of Rademacher random variables.

There might be more general statements of these results that exist but the paper does not really cite them, and I am having trouble finding references to exact statements and proofs. If anybody can provide a reference (preferably a textbook) or a hint, that would be greatly appreciated!

For reference, the paper and the argument cited are here, on page 12.

Edit: I am looking for a reference to read, not a direct solution or anything like that.

Separating scaling, rotation and sheer coefficients correctly in this case

Posted: 03 Apr 2021 07:27 PM PDT

This question follows on from Correct the Fourier transform of a shear-distorted image? ($x_{new} = x + \alpha y$). I have a group of distorted $x, y$ points and have found a matrix $A$ that corrects their positions:

$$ \begin{bmatrix} x' \\ y' \\ \end{bmatrix} = \begin{bmatrix} a_{xx} & a_{xy} \\ a_{yx} & a_{yy} \\ \end{bmatrix} \begin{bmatrix} x \\ y \\ \end{bmatrix} $$

I now want to break this up into three separate terms, one for pure scaling or magnification $M$, one for pure rotation $R$, and one for shear $S$. I think they will look like this:

$$M = \begin{bmatrix} m_x & 0\\ 0 & m_y \\ \end{bmatrix} $$

$$R = \begin{bmatrix} \cos \theta & -\sin \theta \\ \sin \theta & \cos \theta \\ \end{bmatrix} $$ $$S = \begin{bmatrix} 1 & s_{xy} \\ s_{yx} & 1 \\ \end{bmatrix} $$

I see there are answers to related questions, and I recognize that those answers may apply to my problem, but they are beyond me; I need a simpler, more applied answer, or at least assistance getting there.

Q1: I can imagine setting $A$ equal to the product, but I'm worried about whether the order matters. Will this order work? Would any order work equally well?

$$A = R \ S \ M $$

Q2: I now have five parameters instead of four. Since I've separated out the rotation, should I now use only one shear, i.e. do I have to choose only one of $s_{xy}$ or $s_{yx}$ and set the other to zero? Or should I just use $s_{xy} = s_{yx} = s$?

$$S = \begin{bmatrix} 1 & s \\ s & 1 \\ \end{bmatrix} $$
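For concreteness, one common way to compute such a factorization numerically is via a QR decomposition; a rough sketch (my own function name, and it assumes $\det A > 0$ so that the orthogonal factor is a pure rotation):

    import numpy as np

    def decompose_rsm(A):
        # Factor A = R @ S @ M with R a rotation, S an upper unit-diagonal shear,
        # and M a diagonal scaling.  Assumes det(A) > 0.
        Q, U = np.linalg.qr(A)               # A = Q U with U upper triangular
        D = np.diag(np.sign(np.diag(U)))     # flip signs so U has a positive diagonal
        R, U = Q @ D, D @ U                  # R is then a proper rotation
        M = np.diag(np.diag(U))              # scales m_x, m_y
        S = U @ np.linalg.inv(M)             # unit-diagonal shear: s_xy = U[0,1] / U[1,1]
        return R, S, M

    A = np.array([[1.1, 0.2], [-0.1, 0.9]])
    R, S, M = decompose_rsm(A)
    print(np.allclose(R @ S @ M, A))         # True

With this particular convention the order is $A = R\,S\,M$ and the shear automatically has $s_{yx}=0$, so only one shear parameter is needed once the rotation is separated out; other orderings give different (but equally valid) parameter values, so the order does matter for the numbers you read off.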

Prove by contrapositive

Posted: 03 Apr 2021 07:53 PM PDT

$$\forall a,b \in \Bbb Z \qquad ab>0 \implies \frac {a}{b} + \frac {b}{a} \geq 2$$

How do I prove this by contrapositive? So far, I have arrived at $(a-b)^2 \leq 0$.
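For reference, the algebraic identity connecting the two sides (valid whenever $ab \neq 0$) is

$$ \frac {a}{b} + \frac {b}{a} - 2 = \frac{(a-b)^2}{ab}, $$

so the sign of the left-hand side is governed by the sign of $ab$, which is what makes both the direct and the contrapositive argument work.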

Is there hope for Einstein tensor notation in Quantum Mechanics?

Posted: 03 Apr 2021 07:51 PM PDT

The Core Question

How would one define and justify the tensor notation for vectors and operators on a separable Hilbert space?

Motivation

Whenever I work with finite-dimensional linear algebra, I find it very illuminating to think about vectors, linear maps, bilinear forms etc. collectively as just various types of tensors. I like how the tensor formalism puts emphasis on representation-free formulations and how even a very complicated multi-linear map can be understood in terms of its individual „indices".

When I started to learn Quantum Mechanics, I immediately tried to use the tensor notation. At a surface level it seemed to work just fine. One can formally rewrite the bra-ket expressions on left with a „tensor-like" notation on the right: $$ \begin{aligned} \left< \psi \mid \varphi \right> \qquad &\leftrightsquigarrow \qquad \overline{\psi^{\,\mu}} \; g_{\overline{\mu} \nu} \; \varphi^\nu \\ \left| \psi \right> = \hat A \hat B \left| \varphi \right> \qquad &\leftrightsquigarrow \qquad \psi^{\,\mu} = A^\mu_\nu \; B^\nu_\kappa \; \varphi^\kappa \\ W = \left| \psi \right>\left< \psi\right|, \; \operatorname{Tr} W = 1 \qquad &\leftrightsquigarrow \qquad W^\mu_\nu = \psi^{\,\mu} \; g_{\overline{\kappa}\nu} \; \overline{\psi^\kappa}, \;\; W^\mu_\mu = 1 \end{aligned} $$ Furthermore, the tensor notation feels more natural when dealing with composite systems, where the state is a tensor product of states of the underlying systems. However, when I tried to put the notation on a more rigorous ground, I learned that the topic is much more difficult and nuanced than I thought.

The Problem

In finite-dimensional spaces, the tensor notation is made possible by several circumstances: \begin{gather} V^* \otimes V \simeq \operatorname{Hom}(V, V) \label{homomorphism} \tag{1} \\[5pt] T(\mathbf{v}) = T( \; \sum_k v^k \mathbf{e}_k \;) = \sum_k v^k \; T(\mathbf{e}_k) \label{continuity} \tag{2} \\ \big| \operatorname{Tr}(T) \, \big| < \infty \label{trace} \tag{3} \end{gather} While \eqref{homomorphism} essentially makes sure that all $k$-covariant $l$-contravariant tensors are from the same space, \eqref{continuity} lets us describe all tensors using their coefficients wrt. some basis and \eqref{trace} lets us express all operations on tensors using just tensor product and contraction.

In a sense, none of these is true for the separable Hilbert space $\mathcal H$ where QM is done. By a basis on a Hilbert space one usually means the Schauder basis, which describes vectors with an infinite series of coefficients. This, combined with the fact that most interesting operators in QM aren't continuous, gives us: $$ T(\mathbf{v}) = T( \; \sum_{k=1}^\infty v^k \mathbf{e}_k \;) = T( \; \lim_{N\to\infty}\sum_{k=1}^N v^k \mathbf{e}_k \;) \neq \lim_{N\to\infty} T( \; \sum_{k=1}^N v^k \mathbf{e}_k \;) = \sum_{k=1}^\infty v^k \; T(\mathbf{e}_k) $$ The sequence on the RHS might either diverge or even converge to a different value than the LHS (although this is pathological and usually not considered in practice). Condition \eqref{trace} only holds for the so-called trace-class operators (not even the identity is trace-class) and the space $\mathcal H^* \widehat{\otimes} \mathcal H$ (tensor product of Hilbert spaces) is isomorphic to finite-rank operators, a proper subset of $\operatorname{Hom}(\mathcal H, \mathcal H)$.

A Possible Solution? (and more questions)

These unfortunate circumstances mean, that if it's even possible to justify the tensor notation in infinite-dimensional spaces, it can't be by a simple generalization of the finite-dimensional case. So, is it possible to make it work?

The condition \eqref{continuity} seems relatively easy to fix – one just needs to replace the Schauder basis with a Hamel basis. A serious downside to this would be that a typical operator would have an uncountable number of coefficients.

For \eqref{homomorphism} it would be great if one could find a space $\mathcal G$, such that the Hilbert space $\mathcal H$ is embedded in it $\mathcal H \subset \mathcal G$, with the property $\mathcal G \otimes \mathcal G \simeq \operatorname{Hom}(\mathcal H^*, \mathcal H)$. Vectors of the bigger space could then be used to construct operators and bilinear forms. Here, one immediately thinks about the Gelfand triple $\Phi \subset \mathcal H \subset \Phi^*$ – could it be that $\mathcal G = \Phi^*$ for an appropriate choice of rigging? Sadly it appears that no, as the Schwartz kernel theorem says that $\Phi^* \otimes \Phi^* \simeq \operatorname{Hom}(\Phi, \Phi^*)$. Is such a space $\mathcal G$ even possible? If yes, would this approach generalize to tensors of higher degree?

Finally, the problem with \eqref{trace} seems to be there to stay. There is no reasonable way to define the trace of identity, for example. What I'm interested in is whether, given a complicated expression, one can tell a priori which indices are contractible and which will inevitably diverge.

Are these proposed "solutions" any good? Is there any literature that would investigate this topic? Or am I on the wrong path and should I stop wasting my time with tensors in the infinite dimension?

Show that every ideal of a ring R is the kernel of some ring homomorphism

Posted: 03 Apr 2021 08:04 PM PDT

Theorem: Show that every ideal of a ring $R$ is the kernel of some homomorphism from $R$ to some other ring.

My proof: Let $A$ be an ideal of $R$.

Define $f : R \rightarrow R/A$ by $f(r) = r + A$ for all $r \in R$.

To show: $f$ is a ring homomorphism (and well defined), and then that $\ker(f) = A$.

$f$ is well defined: if $x=y$, then $x + A = y + A$, so $f(x) = f(y)$.

$f(x+y) = (x+y) + A = (x + A) + (y + A) = f(x) + f(y)$

$f(xy) = (xy) + A = (x+A)(y+A) = f(x)f(y)$

To show: $ker(f) = A$

Proof: $\ker f = \{x \in R \mid f(x) = 0 + A\} = \{x \in R \mid x + A = 0 + A\} = \{x \in R \mid x + A = A\} = \{ x \in R \mid x \in A \}$. Thus, $\ker(f) = A$.

My question: I'm not sure whether the proof is correct; does $f : R \rightarrow R/A$ generalize to 'any ring homomorphism'?

How many different sudoku puzzles are there?

Posted: 03 Apr 2021 07:28 PM PDT

The answer: 6,670,903,752,021,072,936,960, according to this site:

https://www.technologyreview.com/2012/01/06/188520/mathematicians-solve-minimum-sudoku-problem/

I have tried to get this number using direct methods, but basically I have found the question too hard. There are too many possibilities, and I am likely missing some strategic ways to attack the problem. Any help would be greatly appreciated, thanks.
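The full $9\times 9$ count is far out of reach of naive brute force, but the $4\times 4$ analogue ("Shidoku", with $2\times 2$ boxes) is small enough to count directly, which at least shows what a direct backtracking count looks like. A rough sketch (the function name is mine):

    from itertools import product

    def count_shidoku():
        # Count completed 4x4 Sudoku ("Shidoku") grids by backtracking.
        grid = [[0] * 4 for _ in range(4)]

        def ok(r, c, v):
            if v in grid[r]:
                return False
            if any(grid[i][c] == v for i in range(4)):
                return False
            br, bc = 2 * (r // 2), 2 * (c // 2)
            return all(grid[br + i][bc + j] != v
                       for i, j in product(range(2), repeat=2))

        def fill(pos):
            if pos == 16:
                return 1
            r, c = divmod(pos, 4)
            total = 0
            for v in range(1, 5):
                if ok(r, c, v):
                    grid[r][c] = v
                    total += fill(pos + 1)
                    grid[r][c] = 0
            return total

        return fill(0)

    print(count_shidoku())  # 288

The $9\times 9$ figure quoted above was obtained by Felgenhauer and Jarvis by combining this kind of exhaustive search with heavy symmetry reductions, not by raw enumeration.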
