Saturday, November 13, 2021

Recent Questions - Mathematics Stack Exchange



OLYMPIAD MATH QUESTION - Area of trapezoid given all sides

Posted: 13 Nov 2021 09:50 PM PST

I was going through some Olympiad Maths and found this question:

Given a trapezoid with its upper base $5$ cm long, lower base $10$ cm long, and its legs are $3$ and $4$ cm long. What is the area of this trapezoid?

Yeah, I know. There are equations to calculate this, I found some equations on Math Stack Exchange too.

What I don't understand is that this is an Olympiad question. The proofs that I saw to create the formulae did not look like something that should appear in an Olympiad question. Am I missing something, or do I actually need to create my own formula to solve this question? Keep in mind that this is a timed test; if I was actually taking this test, I would have to solve this in 2 minutes maximum.
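For what it's worth, the quick route is presumably (my assumption, not stated in the question) to slide one leg parallel to itself so that the two legs and the base difference $10-5=5$ form a $3$-$4$-$5$ right triangle, whose height onto the side of length $5$ is also the trapezoid's height. A minimal sketch:

```python
import math

# Hypothetical helper: slide one leg so that the two legs and the base
# difference a - b form a triangle; that triangle's height onto the side
# of length a - b equals the trapezoid's height.
def trapezoid_area(a, b, c, d):
    e = a - b                        # base of the auxiliary triangle
    x = (e*e + c*c - d*d) / (2*e)    # foot of the height along that base
    h = math.sqrt(c*c - x*x)         # trapezoid height
    return (a + b) / 2 * h

area = trapezoid_area(10, 5, 3, 4)   # the 3-4-5 triangle gives h = 2.4
```

Here the auxiliary triangle is right-angled ($3^2+4^2=5^2$), so its height is $3\cdot 4/5 = 2.4$ and the area is $\frac{10+5}{2}\cdot 2.4 = 18$, well within a two-minute budget.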

I'm not quite sure what "U(0, i), i = 1, 2" means; is it true that the pdf of Y is also 1/y with y = 1, 2, the same as that of X?

Posted: 13 Nov 2021 09:44 PM PST

The probability distribution of the random variable X is P(X = 1) = P(X = 2) = 1/2. Given X = i, the random variable Y follows the uniform distribution U(0, i), i = 1, 2.

1. Find the cumulative distribution function $F_Y(y)$.

Prove the correctness of Decryption in an RLWE-based Cryptosystem

Posted: 13 Nov 2021 09:40 PM PST

I am stuck on the proof of decryption correctness in an RLWE-based cryptosystem. To show where I am, let me present the full scheme first. The image is from chapter 3.2 of this paper. [image of the scheme omitted]

The decryption correctness proof of the scheme follows. [image of the proof omitted]

In this proof, I can obtain the second-to-last equation in the decryption procedure, i.e. $$\mathbf{m} + (t/q)(\mathbf{v}-\epsilon \cdot \mathbf{m}) + t\cdot \mathbf{r} $$

But I don't know how to obtain the last equation: $$(t/q)||\mathbf{v}-\epsilon \cdot \mathbf{m}|| \lt 1/2 $$

I have some clues. We already have $||\mathbf{v}|| \le 2\cdot \delta_R \cdot B^2 + B$; then, for $2\cdot \delta_R \cdot B^2 + B \lt \Delta / 2$, we have $||\mathbf{v}|| \lt \frac{q}{2t}$ since $\Delta = \lfloor q/t \rfloor \le q/t$. Hence $(t/q)||\mathbf{v}|| \lt \frac{1}{2}$. This is very similar to what we want, i.e. $(t/q)||\mathbf{v}-\epsilon \cdot \mathbf{m}|| \lt 1/2 $.
I guess there is a relation between $||\mathbf{v}||$ and $||\mathbf{v}-\epsilon \cdot \mathbf{m}||$, but I don't know how to establish it. The proof in the paper has a short explanation, "Since $\mathbf{m} \in R_t$", but I cannot understand it. Any hints would be helpful.

Also, the norm in this paper is the infinity norm.

PS: I wondered whether to post this question on Cryptography Stack Exchange or on Math, but since it is more math-heavy, I guess it belongs here.

On the nth cyclotomic polynomial over $F$

Posted: 13 Nov 2021 09:30 PM PST

Just after "55.2 Definition" in the book A First Course in Abstract Algebra - JB Fraleigh it comes the following claims without proofs :

Let $K$ be the splitting field of $x^n - 1$ over $F$. Since an automorphism of the Galois group G(K / F) must permute the primitive nth roots of unity, we see that $\Phi_n(x)$ is left fixed under every element of G(K / F) regarded as extended in the natural way to K[x]. Thus $\Phi_n(x) \in F[x]$.

My questions:

1- G(K / F) permutes the zeros of $x^n - 1$. How does one prove that G(K / F) also permutes the zeros of $\Phi_n(x)$, i.e. that no element of G(K / F) maps a zero of $\Phi_n(x)$ to a zero of $x^n - 1$ that is not a zero of $\Phi_n(x)$?

2- How does the phenomenon in question 1 lead to $\Phi_n(x) \in F[x]$, i.e. why does this inclusion hold?
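A concrete instance (added for illustration) may help fix ideas: take $F=\mathbb{Q}$ and $n=4$, so $K=\mathbb{Q}(i)$. The nontrivial element of $G(K/\mathbb{Q})$ is complex conjugation, which swaps the primitive $4$th roots of unity $i$ and $-i$, and indeed
$$\Phi_4(x) = (x-i)(x+i) = x^2+1 \in \mathbb{Q}[x].$$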

Linear transformation of a line

Posted: 13 Nov 2021 09:32 PM PST

Question:

Find the image of the line $y=5x+9$ by rotation with center $O(0,0)$ with rotation angle of $270^{\circ}$ followed by dilatation at the center $O(0,0)$ with a scale factor of $3$.

My Attempt:

Given the line $y=5x+9$, it passes through $(-1.8,0)$ and $(0,9)$. I used these two points to perform the rotation of $270^{\circ}$ about the origin. When rotated anti-clockwise, the images of these two points become $(0,1.8)$ and $(9,0)$ respectively. With a dilatation at the center with a scale factor of $3$, these points become $(0, 5.4)$ and $(27,0)$, hence the image of the line $y=5x+9$ is $y=-0.2x+5.4$ or $10y=-2x+54$.

When rotated clockwise, the image of the line $y=5x+9$ becomes $y=-0.2x-5.4$ or $10y=-2x-54$.
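Both cases can be cross-checked numerically; this is a quick sketch of such a check (not part of the original attempt), applying the rotation matrix and then the scale factor to the two chosen points:

```python
import math

def transform(pt, degrees, scale):
    """Rotate pt about the origin by `degrees` (positive = anticlockwise),
    then dilate by `scale`."""
    t = math.radians(degrees)
    x, y = pt
    return (scale * (x * math.cos(t) - y * math.sin(t)),
            scale * (x * math.sin(t) + y * math.cos(t)))

def line_through(p, q):
    slope = (q[1] - p[1]) / (q[0] - p[0])
    return slope, p[1] - slope * p[0]          # (slope, y-intercept)

pts = [(-1.8, 0.0), (0.0, 9.0)]                # two points on y = 5x + 9
anti = line_through(*[transform(p, 270, 3) for p in pts])
clock = line_through(*[transform(p, -270, 3) for p in pts])
```

The anticlockwise case reproduces $y=-0.2x+5.4$ and the clockwise case $y=-0.2x-5.4$, matching the attempt above.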

I would like to check whether my answer/approach for this question is correct. If not, how should I approach it? Thank you.

How Dehn-like is this function?

Posted: 13 Nov 2021 09:19 PM PST

This question is about a variant of the Dehn function, where roughly speaking we count switches between different relations rather than each use of a relation at all.


Setup

Given a set $U$ and equivalence relations $\sim_1,...,\sim_n,\sim$ on $U$ with $\sim$ being the transitive closure of the union $\bigcup_{1\le i\le n}\sim_i$, let $\delta: U^2\rightarrow\mathbb{N}$ be the function given by

$$\begin{cases} \delta(a,b)=0 & \mbox{ if }a=b\mbox{ or }a\not\sim b,\\ \delta(a,b)=\min\{k:\exists c_1,i_1,...,c_k,i_k[a=c_1\sim_{i_1}c_2\sim_{i_2}...c_k\sim_{i_k}b]\} & \mbox{ otherwise.}\\ \end{cases}$$

Roughly speaking, $\delta(a,b)$ counts how many times we need to switch between the individual $\sim_i$s to establish $\sim$-equivalence of $a$ and $b$. Now if additionally we have a length function on $U$, that is a finite-to-one map $l:U\rightarrow\mathbb{N}$, we can further define a function $$d: \mathbb{N}\rightarrow\mathbb{N}: m\mapsto\max\{\delta(a,b):a,b\in l^{-1}(\{0,1,...,m\})\}.$$
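On finite toy instances the quantity $\delta$ can be computed directly; this is a minimal sketch (all names hypothetical), representing each $\sim_i$ as a map from elements to class labels and running a breadth-first search in which one step moves anywhere within a single $\sim_i$-class:

```python
from collections import deque

def delta(a, b, relations):
    """Least k with a chain a ~_{i_1} c_2 ~_{i_2} ... ~_{i_k} b;
    returns 0 if a == b or if a, b are not ~-equivalent (as in the
    definition above).  Each relation is a dict element -> class label."""
    if a == b:
        return 0
    members = []
    for rel in relations:
        inv = {}
        for elt, cls in rel.items():
            inv.setdefault(cls, set()).add(elt)
        members.append((rel, inv))
    seen, queue = {a}, deque([(a, 0)])
    while queue:
        x, k = queue.popleft()
        for rel, inv in members:
            for y in inv[rel[x]]:          # whole ~_i-class of x in one step
                if y == b:
                    return k + 1
                if y not in seen:
                    seen.add(y)
                    queue.append((y, k + 1))
    return 0                               # a and b are not ~-equivalent
```

Since each $\sim_i$ is transitive, consecutive steps in the same relation collapse into one, so $k$ indeed counts switches between the relations, as described above.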


Question

Now given a finitely presented group $G=\langle a_1,...,a_m\vert R_1,...,R_n\rangle$, consider the $d_G$ defined as above with:

  • $U$ being (the underlying set of) the free group on $m$ generators $a_1,...,a_m$;

  • $\sim_1,...,\sim_n$ being the congruences on $U$ corresponding to the relators $R_1,...,R_n$ respectively, and $\sim$ being the transitive closure of their union; and

  • $l$ being the usual length function on $U$ (= length of the shortest "word" resulting in the given group element).

This $d_G$ is at first glance vaguely like $G$'s Dehn function $D_G$. However, I'm not actually sure how similar they are. Unlike $D_G$ the function $d_G$ essentially allows us to apply the same relation as many times as we want in a row with no additional cost, and it seems to me that this could result in very different behavior.

Suppose $d_G$ has polynomial-bounded growth rate. Must $D_G$ also have polynomial-bounded growth rate? (Of course trivially $d_G\le D_G$.)

Ultimately I'm actually interested in variants of $d_G$ in algebras, in the universal-algebra sense, other than groups. Roughly speaking, any time we have a finitely generated algebra $\mathcal{A}$ and congruence relations $\approx_1,...,\approx_n$ on $\mathcal{A}$ we can build a corresponding function $d$. This feels like a potentially nice generalization of the Dehn function which could measure how "badly entangled" the $\approx_i$s are ... but I'm not sure how related to the Dehn function it actually is, hence this question. The universal algebra tag is relevant only to the motivation of this question.

An application of Perfectoid to Class Field Theory

Posted: 13 Nov 2021 09:15 PM PST

I am a beginner in number theory, so I apologize for asking a rough question. I have read the definition of a perfectoid field and related notions, and I feel that perfectoids are number-theoretic objects. On the other hand, there is another important number-theoretic subject, namely Class Field Theory. Though this is a simple idea, it seems that perfectoids might be applicable to Class Field Theory. Is there an application of perfectoids to Class Field Theory? Thanks in advance.

Find a linear mapping that maps coplanar points to coplanar points

Posted: 13 Nov 2021 09:14 PM PST

I have a set of coplanar points ($X_1$, $X_2$, .., $X_k$) in 4-dim space. I want to find a linear mapping that maps these points to 3-dim space such that the resultant points ($X'_1$, $X'_2$, .., $X'_k$) are also coplanar in the 3-dim space.

I have tried the following but it does not work:

Pick 4 coplanar points $X_1$, $X_2$, $X_3$ and $X_4$ which are not collinear, form $V_1$ = $X_2$ - $X_1$, $V_2$ = $X_3$ - $X_1$, and $V_3$ = $X_4$ - $X_1$, then perform the Gram-Schmidt procedure on $V_1$, $V_2$ and $V_3$ to get 3 orthonormal vectors $e_1$, $e_2$, and $e_3$. Then construct the linear mapping by stacking the 3 orthonormal vectors together: $M$ = [$e_1$ $e_2$ $e_3$]$^T$.
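The construction described above can be sketched as follows (hypothetical names; note that if the four points span only a 2-dimensional plane, the third Gram-Schmidt step degenerates, so the example points below are chosen to give three independent edge vectors):

```python
import numpy as np

def build_mapping(X1, X2, X3, X4):
    """Gram-Schmidt on V1 = X2-X1, V2 = X3-X1, V3 = X4-X1, then stack the
    orthonormal vectors as the rows of the 3x4 matrix M."""
    basis = []
    for v in (X2 - X1, X3 - X1, X4 - X1):
        w = v.astype(float)
        for e in basis:
            w = w - np.dot(w, e) * e       # strip components along earlier e's
        norm = np.linalg.norm(w)
        if norm < 1e-12:
            raise ValueError("edge vectors are linearly dependent")
        basis.append(w / norm)
    return np.vstack(basis)                # M = [e1; e2; e3]

# Four points in the hyperplane w = 0 of 4-dim space (for illustration):
X = [np.array(p, float) for p in
     [(0, 0, 0, 0), (1, 0, 0, 0), (1, 1, 0, 0), (1, 1, 1, 0)]]
M = build_mapping(*X)
```

The resulting $M$ has orthonormal rows, i.e. $MM^T = I_3$.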

However, I find that the resultant points ($X'_1$=$M$$X_1$, $X'_2$=$M$$X_2$, .., $X'_k$=$M$$X_k$) are not coplanar in the 3-dim space.

Any idea how to construct a proper linear mapping that I want?

On divisibility by large primes for a streak of consecutive integers.

Posted: 13 Nov 2021 09:28 PM PST

Is there a constant $M \in \mathbb{N}$ so that for all primes $p \geq M$, $k \geq 2, k \in \mathbb{N}$ there exists a prime $q_{p,k} \geq p$ for which $q_{p,k}$ divides $\frac{(k+p-2)!}{(k-1)!}$?

If we temporarily allow $k = 1$ in the question then the answer is no, as $(p-1)!$ is not divisible by any prime that is at least $p$. The statement clearly holds for the prime $2$. It also holds for the prime $3$, as $(k+1)k$ is never a power of $2$ when $k \geq 2$ (since one of $k,k+1$ is odd and $\geq 3$).

Note that we have the following: for all primes $p \geq 3$ and $k \geq \text{lcm}(1,...,p-1)$ there exists a prime $q_{p,k} \geq p$ for which $q_{p,k}$ divides $\frac{(k+p-2)!}{(k-1)!}$. This can be figured out with set theory.
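The statement is easy to probe numerically for a fixed small prime; this is a rough sketch (a numerical check, not a proof), using the fact that $\frac{(k+p-2)!}{(k-1)!}$ is the product of the $p-1$ consecutive integers $k,\dots,k+p-2$:

```python
def has_large_prime_factor(k, p):
    """True iff some prime q >= p divides (k+p-2)!/(k-1)!, i.e. the product
    of the p-1 consecutive integers k, ..., k+p-2."""
    n = 1
    for m in range(k, k + p - 1):
        n *= m
    for q in range(2, p):          # strip all prime factors below p
        while n % q == 0:
            n //= q
    return n > 1
```

For $p=5$ this holds for every $2 \le k \le 2000$ checked, while $k=1$ fails, matching the remark about $(p-1)!$ above.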

$f(x^2)=f(x)+f(-x)$

Posted: 13 Nov 2021 09:15 PM PST

Is there only one function (or class of functions) for which $f(x^2)=f(x)+f(-x)$? I know $\ln(1-x)$ satisfies the identity... I tried finding $f(xy)$, hoping that it equals $f(x)+f(y)$, because then it would be a logarithm, but to no avail.

Thanks

Edit: the domain of the function should be $(-1,1)$.
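A quick numerical confirmation (a sketch, added for illustration) that $\ln(1-x)$ satisfies the identity on $(-1,1)$, via $\ln(1-x^2)=\ln(1-x)+\ln(1+x)$:

```python
import math

def f(x):
    return math.log(1 - x)   # defined for x < 1

# Check f(x^2) = f(x) + f(-x) at several points of (-1, 1):
checks = [abs(f(x * x) - (f(x) + f(-x))) for x in (-0.9, -0.5, 0.3, 0.99)]
```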

Prove that the Gaussian integers Z [i] form an integral domain

Posted: 13 Nov 2021 09:15 PM PST

This is all I have done. Is this correct? Please explain.

Proof. $0 = 0 + 0i$ is in $\mathbb{Z}[i]$. Also note that $1 = 1 + 0i$ is in $\mathbb{Z}[i]$.

Then $(a + bi) + (c + di) = (a + c) + (b + d)i$ is in $\mathbb{Z}[i]$ and $(a + bi)(c + di) = (ac-bd) + (ad + bc)i$ is in $\mathbb{Z}[i]$. Therefore $\mathbb{Z}[i]$ is a subring of $\mathbb{C}$.

Hence $\mathbb{Z}[i]$ is a commutative ring with unity. Furthermore, if $(a + bi)(c + di) = 0$, then (as elements of the integral domain $\mathbb{C}$) either $a + bi = 0$ or $c + di = 0$. Therefore $\mathbb{Z}[i]$ is an integral domain.

An observation on the non trivial zeta zeros

Posted: 13 Nov 2021 09:26 PM PST

I found something interesting that happens at the zeros on the critical line, and I would like your feedback. Is it right? Is it well known? Apologies if this is a bit on the longer side.

Real and Imaginary parts of $\eta(s)$

Let $s = a + bi$ and $\eta(s) = x + yi$. We can simplify $\eta(s)$ to get its real and imaginary parts as follows:

$$\begin{align} x &= \sum\limits_{j=1}^\infty\frac{(-1)^{j+1} \cos (b\log{j})}{j^a} \\\\ y &= \sum\limits_{j=1}^\infty\frac{(-1)^{j} \sin (b\log{j})}{j^a} \end{align}$$

Square of the magnitude of $\eta(s)$

We can compute the square of the magnitude $m_a = x^2 + y^2 $ of $\eta(s)$.

$$\begin{align} x^2 &= \sum\limits_{j=1}^\infty\frac{ \cos^2 {( b\log{j} )} } { j^{2a} } + 2\sum\limits_{\substack{j\geq2 \\\\ k>j}}^\infty\frac{ (-1)^{j+k} \cos{(b\log{j}) }\cos{(b\log{k})} }{ (jk)^a} + 2\sum\limits_{j=2}^\infty\frac{ (-1)^{j+1}\cos{(b\log{j}) } } {j^{a}} \\\\ y^2 &= \sum\limits_{j=1}^\infty\frac{ \sin^2 {( b\log{j} )} } { j^{2a} } +2 \sum\limits_{\substack{j\geq2 \\\\ k>j}}^\infty\frac{ (-1)^{j+k} \sin{(b\log{j}) }\sin{(b\log{k})} }{ (jk)^a} \\\\ m_{a} &= x^2 + y^2 \\\\ &= \sum\limits_{j=1}^\infty\frac{ 1}{ j^{2a} } + 2\sum\limits_{\substack{j\geq2 \\\\ k>j}}^\infty\frac{ (-1)^{j+k} \cos(b\log{\frac{j}{k}})}{ {(jk)}^a} + 2\sum\limits_{j=2}^\infty\frac{ (-1)^{j+1}\cos{(b\log{j}) } } {j^a} \\\\ &= \sum\limits_{\substack{j = 1 \\\\ k = 1}}^\infty\frac{ (-1)^{j+k} \cos(b\log{\frac{j}{k}})}{ (jk)^a} \end{align} $$

Matrix representation

$m_a$ can be represented as a matrix $R[a, b]$ as below.

$$R[a,b] = \begin{bmatrix} \frac{ \cos(b\log{\frac{1}{1}})} { (1*1)^a} & \frac{ -\cos(b\log{\frac{1}{2}})}{ (1*2)^a} & \frac{ \cos(b\log{\frac{1}{3}})}{ (1*3)^a} & \frac{ -\cos(b\log{\frac{1}{4}})}{ (1*4)^a} & \dots \\\\ \frac{-\cos(b\log{\frac{2}{1}})}{ (2*1)^a} & \frac{\cos(b\log{\frac{2}{2}})}{ (2*2)^a} & \frac{-\cos(b\log{\frac{2}{3}})}{ (2*3)^a} & \frac{\cos(b\log{\frac{2}{4}})}{ (2*4)^a} & \dots \\\\ \frac{\cos(b\log{\frac{3}{1}})}{ (3*1)^a} & \frac{-\cos(b\log{\frac{3}{2}})}{ (3*2)^a} & \frac{\cos(b\log{\frac{3}{3}})}{ (3*3)^a} & \frac{-\cos(b\log{\frac{3}{4}})}{ (3*4)^a} & \dots \\\\ \frac{-\cos(b\log{\frac{4}{1}})}{ (4*1)^a} & \frac{\cos(b\log{\frac{4}{2}})}{ (4*2)^a} & \frac{-\cos(b\log{\frac{4}{3}})}{ (4*3)^a} & \frac{\cos(b\log{\frac{4}{4}})}{ (4*4)^a} & \dots \\\\ \vdots \end{bmatrix}$$

Few observations

  1. $R[a,b]$ is an $n \times n$ square matrix where $n \to \infty$
  2. $R[a,b]$ is a symmetric matrix since $\cos (b \log \frac{j}{k}) = \cos (b \log \frac{k}{j})$
  3. The trace of the matrix $R[\frac{1}{2}, b]$ is the harmonic series. It is interesting to see the harmonic series play a role in the zeros of $\eta(s)$
  4. The sum of elements in row 1 is the real part of $\eta(s)$
  5. For a zero on the critical line,
    1. The sum of all elements in the matrix is zero. This is saying that the magnitude of $\eta(s) = 0$ (naturally).
    2. The sum of elements in row 1 is zero which is saying that the real part of $\eta(s)$ is zero.
    3. The sum of elements in every row is also equal to zero. Said differently, the sum of elements of each row excluding the diagonal element for that row is equal to the negative of the diagonal element of that row. The sum of the non-diagonal elements in row $i$ offsets the $i^{th}$ term in the harmonic series. The proof is shown in the next section.
    4. Let $S$ represent the sum of all elements of $R[\frac{1}{2}, b]$ \begin{align} \begin{split} S &= \sum\limits_{j = 1}^\infty \frac{(-1)^{j+1} \cos(b\log{\frac{j}{1})}}{ (1j)^a} \\ &+ \sum\limits_{j = 1}^\infty \frac{(-1)^{j+2} \cos(b\log{\frac{j}{2})}}{ (2j)^a} \\ &+ \sum\limits_{j = 1}^\infty \frac{(-1)^{j+3} \cos(b\log{\frac{j}{3})}}{ (3j)^a} \\ &+ \dots \end{split} \end{align} Each term above represents a sum of a row of the matrix and each sum is equal to zero. In other words, \begin{align} \begin{split} 1 &= \sum\limits_{j = 2}^\infty \frac{(-1)^{j} \cos(b\log{\frac{j}{1})}}{ (1j)^a} \\\\ \frac{1}{2} &= \sum\limits_{\substack{j = 1 \\\\ j \neq 2}} \frac{(-1)^{j+1} \cos(b\log{\frac{j}{2})}}{ (2j)^a} \\\\ \frac{1}{3} &= \sum\limits_{\substack{j = 1 \\\\ j \neq 3}} \frac{(-1)^{j} \cos(b\log{\frac{j}{3})}}{ (3j)^a} \\\\ \vdots \\\\ \frac{1}{k} &= \sum\limits_{\substack{j = 1 \\\\ j \neq k}} \frac{(-1)^{j+k-1} \cos(b\log{\frac{j}{k})}}{ (jk)^a} \end{split} \end{align}

Proof that sum of each row is zero

We know that for a zero on the critical line $x = 0, y = 0, a = \frac{1}{2}$. Consider row j of the matrix $R[\frac{1}{2}, b]$. Let $S_j$ represent the sum of all elements of row j (where j is assumed even WLOG). In what follows we use the fact that \begin{align}\cos(b\log{\frac{j}{k}}) = \cos (b \log{j}) \cos (b \log{k}) + \sin (b \log{j}) \sin(b \log{k}) \end{align}

and for a zero on critical line, we have \begin{align} \begin{split} 1 - \frac{\cos (b \log{2})}{\sqrt{2}}+ \frac{\cos (b \log{3})}{\sqrt{3}} +\dots - \frac{\cos (b \log{j})}{\sqrt{j}} + \dots &= 0 \\\\ \frac{\sin (b \log{2})}{\sqrt{2}} - \frac{\sin (b \log{3})}{\sqrt{3}} +\dots + \frac{\sin (b \log{j})}{\sqrt{j}} -\dots &= 0 \end{split} \end{align}

\begin{align} \begin{split} S_j &= \sum\limits_{\substack{k = 1}}^\infty\frac{ (-1)^{j+k} \cos(b\log{\frac{j}{k}})}{ \sqrt{jk}} \\\\ &= \frac{ -\cos(b\log{\frac{j}{1}})}{ \sqrt{j}} + \frac{\cos(b\log{\frac{j}{2}})}{ \sqrt{2j}} - \frac{ \cos(b\log{\frac{j}{3}})}{ \sqrt{3j}} + \frac{ \cos(b\log{\frac{j}{4}})}{ \sqrt{4j}} - \dots \\\\ &= \frac{1}{j} - \frac{\cos(b\log{j})}{ \sqrt{j}} \left( \sum\limits_{\substack{k = 1 \\ k \neq j}}^\infty\frac{ (-1)^{j+k} \cos(b\log k)}{ \sqrt{k}}\right) + \frac{\sin(b\log{j})}{ \sqrt{j}} \left(\sum\limits_{\substack{k = 1 \\ k \neq j}}^\infty\frac{ (-1)^{j+k} \sin(b\log k)}{ \sqrt{k}}\right) \\\\ &= \frac{1}{j} - \frac{\cos(b\log{j})}{ \sqrt{j}} * \frac{\cos(b\log{j})}{ \sqrt{j}} + \frac{\sin(b\log{j})}{ \sqrt{j}}*\frac{-\sin(b\log{j})}{ \sqrt{j}} \\\\ &= \frac{1}{j} - \left(\frac{\cos^2 (b \log{j}) + \sin^2 (b \log{j})}{j} \right)\\\\ & = \frac{1}{j} - \frac{1}{j} \\\\ &= 0 \end{split} \end{align}

There is another proof as well. Remember $\eta(s) = x + iy$. We can easily show that the sum of any row is given by: \begin{align} S_j = \frac{(-1)^{j+1} x \cos(b \log{j}) + (-1)^{j+2} y \sin(b \log{j}) } {\sqrt{j}} \end{align} Hence when $x$ and $y$ are zero, sum of each row is zero. We can also verify that: \begin{align} \begin{split} S &= \sum\limits_{j=1}^\infty S_j \\\\ &= x \sum\limits_{j=1}^\infty\frac{(-1)^{j+1} \cos(b \log{j})}{\sqrt{j}} + y\sum\limits_{j=1}^\infty \frac{(-1)^{j+2} \sin(b \log{j})} {\sqrt{j}}\\\\ &= xx + yy \\\\ &= x^2 + y^2 \\\\ &= |\eta(s)|^2 \end{split} \end{align}
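The second proof's row-sum formula is in fact an exact algebraic identity even for truncated sums, which makes it easy to check numerically (a sketch; the value of $b$ below is only an approximation to the first zero, but the identity itself holds for any $b$):

```python
import math

def eta_parts(b, N):
    """Truncations of x and y (real and imaginary parts of eta at a = 1/2)."""
    x = sum((-1) ** (k + 1) * math.cos(b * math.log(k)) / math.sqrt(k)
            for k in range(1, N + 1))
    y = sum((-1) ** k * math.sin(b * math.log(k)) / math.sqrt(k)
            for k in range(1, N + 1))
    return x, y

def row_sum(j, b, N):
    """Truncated sum of row j of R[1/2, b]."""
    return sum((-1) ** (j + k) * math.cos(b * math.log(j / k)) / math.sqrt(j * k)
               for k in range(1, N + 1))

b, N = 14.134725, 400    # b ~ imaginary part of the first zeta zero
x, y = eta_parts(b, N)
```

Expanding $\cos(b\log\frac{j}{k})$ by the addition formula shows each truncated row sum equals $\big((-1)^{j+1} x\cos(b\log j) + (-1)^{j+2} y\sin(b\log j)\big)/\sqrt{j}$ with the same truncation, exactly as in the second proof.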

Final thoughts

  1. Not only is the oscillating term inside $|\eta(s)|^2$ offsetting the harmonic series, it is offsetting the harmonic series term by term. This is remarkable.
  2. A similar discussion can be had for the Dirichlet L-series as well. We get:
    a. When $|L(s, \chi_3(j))|^2 = 0$, each term of the L series is individually offset. For example: \begin{align} \frac{\chi^2_3(k)}{k} = \sum\limits_{\substack{j = 1 \\\\ j \neq k}} \frac{ -\chi_3(jk)\cos(b\log{\frac{j}{k}})}{\sqrt{jk}} \end{align}
    b. When $|L(s, \chi_4(j))|^2 = 0$, each term of the L series is individually offset. For example: \begin{align} \frac{\chi^2_4(k)}{k} = \sum\limits_{\substack{j = 1 \\\\ j \neq k}} \frac{ -\chi_4(jk)\cos(b\log{\frac{j}{k}})}{\sqrt{jk}} \end{align}

Hales-Jewett Number $HJ(2,r)$

Posted: 13 Nov 2021 09:04 PM PST

I'm trying to find the exact value for the Hales-Jewett number $HJ(2,r)$, where $HJ(k,r)$ is defined as the smallest $n$ so that any coloring of the elements of $[k]^n$ by $r$ colors has a monochromatic combinatorial line.

It seems like a simple (maybe even trivial) problem, but I'm not sure how to proceed since I'm still trying to wrap my head around combinatorial lines.
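For alphabet size $k=2$ the definition can be brute-forced for tiny $n$ and $r$ (a sketch with hypothetical helper names), which may help build intuition for combinatorial lines:

```python
from itertools import product

def has_mono_line(coloring, n):
    """A combinatorial line in [2]^n: pick a nonempty wildcard set S and fixed
    values elsewhere; its two points set the S-coordinates to all 1s / all 2s."""
    for mask in range(1, 2 ** n):
        S = [i for i in range(n) if mask >> i & 1]
        rest = [i for i in range(n) if not mask >> i & 1]
        for fixed in product((1, 2), repeat=len(rest)):
            p1, p2 = [0] * n, [0] * n
            for i in S:
                p1[i], p2[i] = 1, 2
            for i, v in zip(rest, fixed):
                p1[i] = p2[i] = v
            if coloring[tuple(p1)] == coloring[tuple(p2)]:
                return True
    return False

def hj2(r, n_max=4):
    """Smallest n <= n_max such that every r-coloring of [2]^n contains a
    monochromatic combinatorial line (brute force; tiny cases only)."""
    for n in range(1, n_max + 1):
        points = list(product((1, 2), repeat=n))
        if all(has_mono_line(dict(zip(points, cols)), n)
               for cols in product(range(r), repeat=len(points))):
            return n
    return None
```

For $r=2$ this search returns $2$: coloring the two points of $[2]^1$ differently avoids a line, while in $[2]^2$ the pairs $(11,21)$, $(21,22)$, $(11,22)$ form an odd cycle of "must differ" constraints, so no 2-coloring avoids a monochromatic line.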

Optimum of linear program from graph

Posted: 13 Nov 2021 09:05 PM PST

Given a connected (undirected) graph $G$ with vertex set $V$ of size at least $2$, we are allowed to put a real number $x_v$ on each $v\in V$. The constraint is that, for any $W\subseteq V$ such that the induced subgraphs on both $W$ and $V\setminus W$ are connected, $\displaystyle\left|\sum_{v\in W}x_v\right|\le 1$. We want to maximize $\displaystyle\sum_{v\in V} |x_v|$.

Is it true that the maximum is always an even integer, and there is a maximizing solution where all $x_v$'s are integers?

Examples: If $G$ is a path of length $n$, the maximum is $2n-2$, which occurs at $(1,-2,2,-2,\dots,-2,1)$ if $n$ is odd, and $(1,-2,2,\dots,2,-1)$ if $n$ is even. If $G$ is a cycle of length $n$, the maximum is $n$ if $n$ is even (occurs at $(1,-1,\dots,1,-1)$) and $n-1$ if $n$ is odd (occurs at $(1,-1,\dots,1,-1,0)$).

The constraints and objective can be written as a linear program by taking out absolute values, and some linear programming facts may be useful.
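Small cases are easy to verify by brute force (a sketch with hypothetical helper names), enumerating every $W$ whose two sides induce connected subgraphs:

```python
from itertools import combinations

def connected(vertices, adj):
    vs = set(vertices)
    if not vs:
        return True                 # convention: empty set counts as connected
    stack, seen = [next(iter(vs))], set()
    while stack:
        v = stack.pop()
        if v in seen:
            continue
        seen.add(v)
        stack.extend(w for w in adj[v] if w in vs)
    return seen == vs

def feasible(x, adj):
    """Check |sum_{v in W} x_v| <= 1 for every cut W with both sides connected."""
    V = list(adj)
    for r in range(1, len(V)):
        for W in combinations(V, r):
            if connected(W, adj) and connected(set(V) - set(W), adj):
                if abs(sum(x[v] for v in W)) > 1:
                    return False
    return True

# Path on 5 vertices 0-1-2-3-4 and the candidate from the example above:
adj = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2, 4], 4: [3]}
x = {0: 1, 1: -2, 2: 2, 3: -2, 4: 1}
```

For the path, the only admissible $W$ are prefixes and suffixes, whose partial sums alternate between $\pm 1$ and $0$, so the candidate is feasible with objective value $2\cdot 5 - 2 = 8$.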

About the edge in graph

Posted: 13 Nov 2021 09:09 PM PST

Suppose there is a cube, and I want to find a path of length 2 from a vertex of the cube to its antipodal vertex. Can I use the diagonal of a face as one of the edges of the path?

Can we prove this function is constant on $[0, b)$?

Posted: 13 Nov 2021 09:38 PM PST

I tried to prove the following statement while working on another problem:

Claim: Let $f: [0,b) \rightarrow \mathbb{R}$ such that $f(0) = 0$. Assume that for any $\alpha \in [0,b)$, there exists a neighborhood of $\alpha$, $U_\alpha$, such that $f$ is constant on $U_\alpha$. Then $f(x) = 0$ for all $x \in [0,b)$.

Is this true? Or can we find a counterexample?

I tried defining an equivalence relation on $[0,b)$ by $x \sim y$ iff $f(x) = f(y)$ iff $f$ is constant on $U_x \cup U_y$. However, I couldn't make it work.

Is there some formal/rigorous way to prove the claim?

prove that $\int_0^\infty \dfrac{\sin^2 x-x\sin x}{x^3} dx= \frac{1}2 - \ln 2$

Posted: 13 Nov 2021 09:32 PM PST

Prove that $\int_0^\infty \dfrac{\sin^2 x-x\sin x}{x^3} dx = \frac{1}2 - \ln 2$.

Integration by parts gives \begin{align}\lim\limits_{R\to \infty}\int_0^R\dfrac{\sin^2 x-x\sin x}{x^3} dx &= \lim\limits_{R\to \infty} ( \int_0^R \frac{\sin^2x}{x^3}dx-\int_0^R \frac{\sin x}{x^2} dx)\\ &= \lim\limits_{R\to\infty} ( \frac{\sin^2 x}{-2x^2}\rvert_0^R - \int_0^R \frac{\sin (2x)}{-2x^2}dx-(-\frac{\sin x}{x} \rvert_0^R + \int_0^R \frac{\cos x}{x}dx))\\ &= \lim\limits_{R\to \infty} (\frac{1}2+\int_0^{2R} \frac{\sin u}{u^2/2}(\frac{1}2 du) -(1+ \int_0^R \frac{\cos x}xdx))\quad\text{ (using the substitution $u\mapsto 2x$)}\\ &=-\frac{1}2+\lim\limits_{R\to \infty}(\int_0^{2R} \frac{\sin u}{u^2}du - \int_0^R\frac{\cos x}xdx)\\ &= \frac{1}2 +\lim\limits_{R\to\infty}(\int_R^{2R} \frac{\cos x}x dx)\end{align}

Thus it suffices to show that $\lim\limits_{R\to\infty} \int_R^{2R} \frac{\cos x}x dx = \ln 2.$ The Taylor series expansion of $\cos x$ is given by $\displaystyle\cos x = \sum_{i=0}^\infty \dfrac{(-1)^i x^{2i}}{(2i)!}$.

(If the step below (the one involving the interchanging of an infinite sum and integral) is valid, why exactly is it valid? For instance, does it use uniform convergence?)

The limit equals $\displaystyle\lim\limits_{R\to\infty} \int_R^{2R} \frac{1}x + \sum_{i=1}^\infty \dfrac{(-1)^i x^{2i-1}}{(2i)!}dx = \ln2 + \lim\limits_{R\to\infty} \sum_{i=1}^\infty \left[\dfrac{(-1)^ix^{2i}}{(2i)(2i)!}\right]_R^{2R}.$ But I don't know how to show $\displaystyle\lim\limits_{R\to\infty} \sum_{i=1}^\infty \left[\dfrac{(-1)^ix^{2i}}{(2i)(2i)!}\right]_R^{2R} = -2\ln 2$.

Solving stochastic differential equation confused

Posted: 13 Nov 2021 09:32 PM PST

Solve the stochastic differential equation:

$dX(t)= - \frac{X(t)}{(1+t)}dt + \frac{1}{(1+t)}dB(t)$ assuming a solution of the form $X(t)=g(t,B(t))$, where $g()$ is a $C^2$ function.

I'm a bit confused about how to go about solving this. Below I have attached what I've done so far, but I am unsure how to put it into the form of a solution $X(t)=g(t,B(t))$.
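For reference, one standard route (an integrating-factor sketch, not necessarily the intended method): since $1+t$ is deterministic and $C^1$, the Itô product rule gives
$$d\big[(1+t)X(t)\big] = X(t)\,dt + (1+t)\,dX(t) = X(t)\,dt - X(t)\,dt + dB(t) = dB(t),$$
so $(1+t)X(t) = X(0) + B(t)$, i.e.
$$X(t) = \frac{X(0) + B(t)}{1+t},$$
which is of the form $g(t,B(t))$ with $g(t,w) = \frac{X(0)+w}{1+t}$.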

How can I calculate the distance between two points lying on a cylinder surface of radius R?

Posted: 13 Nov 2021 09:24 PM PST

Dear friends, I hope you are well. I have a big cylinder of radius R in which I have two spherical particles of radii $a_1$ and $a_2$, placed along the z-axis, of known potential. In Cartesian coordinates I define the condition for $\varphi$ as $\varphi = \sqrt{(r[i]-r_1)^2+(z[j]-z_1)^2}\leq a_1$; if $\varphi \leq a_1$ then $\varphi = \varphi _1$, and similarly for the second particle. Now suppose I have a cylindrical particle instead of a spherical one.

How can I define this condition in cylindrical coordinates? In simple words, I have two points on the cylinder surface, $P_1(\rho,\phi_1,z_1)$ and $P_2(\rho,\phi_2,z_2)$. How can I calculate the distance between these two points? Can someone help me understand this problem? Thanks in advance for your time!
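For two points on the same cylindrical surface $\rho = R$ there are two natural notions of distance: the straight-line (chord) distance through the interior, and the geodesic distance along the unrolled surface. A sketch (hypothetical helper name):

```python
import math

def cylinder_distances(R, phi1, z1, phi2, z2):
    """Distances between two points on a cylinder of radius R.
    Returns (chord, geodesic): the straight-line 3D distance and the
    shortest distance measured along the unrolled surface."""
    dphi = (phi2 - phi1 + math.pi) % (2 * math.pi) - math.pi  # wrap to (-pi, pi]
    dz = z2 - z1
    chord = math.sqrt(2 * R * R * (1 - math.cos(dphi)) + dz * dz)
    geodesic = math.sqrt((R * dphi) ** 2 + dz * dz)
    return chord, geodesic
```

The chord formula comes from the law of cosines in the cross-sectional circle; the geodesic formula is the Pythagorean distance after unrolling the cylinder into a flat strip with horizontal coordinate $R\phi$.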

From a standard 52 card deck, you draw two cards without replacement. What is the probability that both cards are red knowing at least 1 is red?

Posted: 13 Nov 2021 09:33 PM PST

When I use Bayes rule:

$P(\text{2 red} \mid \text{1 red}) = \dfrac{P(\text{1 red} \mid \text{2 red})\,P(\text{2 red})}{P(\text{1 red})}$,

$P(\text{2 red} \mid \text{1 red}) = \dfrac{(1)\left(\frac{26}{52}\cdot\frac{25}{51}\right)}{\frac{26}{52}}$,

$P(\text{2 red} \mid \text{1 red}) = \dfrac{25}{51}$.

Is anything wrong with this?
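For comparison, a brute-force enumeration of all two-card draws, conditioning on the event "at least one red" (a sketch; whether this is the same event as the $P(\text{1 red})$ used above is precisely the subtle point):

```python
from fractions import Fraction
from itertools import combinations

deck = ['R'] * 26 + ['B'] * 26
both = at_least_one = 0
for i, j in combinations(range(52), 2):      # all unordered two-card draws
    reds = (deck[i] == 'R') + (deck[j] == 'R')
    if reds >= 1:
        at_least_one += 1
        if reds == 2:
            both += 1
conditional = Fraction(both, at_least_one)
```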

Prove that $f(x)=x^2$ is integrable using upper and lower sums

Posted: 13 Nov 2021 09:03 PM PST

I need to prove that $f:[-1,2]\rightarrow\mathbb{R}$ given by $f(x)=x^2$ is integrable using this theorem:

Let $f:[a,b]\rightarrow\mathbb{R}$ be bounded. The following statements are equivalent:

(1) $f$ is integrable;

(2) for all $\varepsilon>0$, there exist partitions $P$ and $Q$ of $[a,b]$ such that $S(f;Q)-s(f;P)<\varepsilon$;

(3) for all $\varepsilon>0$, there exists a partition $P=\{t_0,\dots,t_n\}$ of $[a,b]$ such that

$$S(f;P)-s(f;P)=\sum_{i=1}^{n}\omega_i(t_i-t_{i-1})<\varepsilon.$$

$$s(f;P)=\sum_{i=1}^{n}m_i(t_i-t_{i-1}); S(f;P)=\sum_{i=1}^{n}M_i(t_i-t_{i-1});$$

$$m_i=\inf\{f(x);x\in[t_{i-1},t_i]\};M_i=\sup\{f(x);x\in[t_{i-1},t_i]\};\omega_i=M_i-m_i.$$

I got stuck in part of the proof I attempted:

Given $\varepsilon>0$, there exists $n\in\mathbb{N}$ such that

$$\dfrac{1}{n}<\varepsilon\quad\mbox{ and }\quad\dfrac{2}{n}<2\varepsilon.$$

Let's take partitions $P_1$ and $P_2$ that refine $P_0=\{-1,0,2\}$ such that $$P_1=\left\{t_0=-1,t_1=-1+\dfrac{1}{n},t_2=-1+\dfrac{2}{n},\dots,t_n=-1+\dfrac{n}{n}=0\right\}\mbox{and}$$

$$P_2=\left\{t_0=0,t_1=0+\dfrac{2}{n},t_2=0+\dfrac{2(2)}{n},\dots,t_n=0+\dfrac{n(2)}{n}\right\}.$$

Note that for each interval $[t_{i-1},t_i]$ of $P_1$ we have

$$m_i=f(t_i)=(t_i)^2\quad\mbox{and}\quad M_i=f(t_{i-1})=(t_{i-1})^2;$$

because $f$ is strictly decreasing on $[-1,0]$.

Note also that for each interval $[t_{i-1},t_i]$ of $P_2$ we have

$$m_i=f(t_{i-1})=(t_{i-1})^2\quad\mbox{and}\quad M_i=f(t_{i})=(t_{i})^2;$$

because $f$ is strictly increasing on $[0,2]$.

In this way,

$$S(f;P_2)-s(f;P_1)=$$

$$\left[\left(\dfrac{2}{n}\right)^2\left(\dfrac{2}{n}-0\right)+\left(\dfrac{4}{n}\right)^2\left(\dfrac{4}{n}-\dfrac{2}{n}\right)+\cdots+\left(2\right)^2\left(2-\dfrac{(n-1)(2)}{n}\right)\right]$$ $$-\left[\left(-1\right)^2\left(-1+\dfrac{1}{n}-(-1)\right)+\left(-1+\dfrac{1}{n}\right)^2\left(-1+\dfrac{2}{n}-\left(-1+\dfrac{1}{n}\right)\right)+\cdots+\left(-1+\dfrac{n-1}{n}\right)^2\left(0-\left(-1+\dfrac{n-1}{n}\right)\right)\right]$$

and that's it... I got stuck. What I was trying to do was simplify all of this to get

$$S(f;P_2)-s(f;P_1)=\left(\dfrac{2}{n}\right)-\left(\dfrac{1}{n}\right)<2\varepsilon-\varepsilon=\varepsilon.$$

But I couldn't do it... can anyone help me?
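A numerical sketch of the two partitions (hypothetical helper name) may help: for a monotone $f$ on a uniform partition of $[a,b]$ into $n$ pieces, each subinterval's sup and inf sit at its endpoints, so $S-s$ telescopes to $\frac{b-a}{n}\,\lvert f(b)-f(a)\rvert$, and each difference can be checked directly:

```python
def lower_upper(f, a, b, n):
    """Lower and upper Riemann sums of a monotone f on a uniform partition
    of [a, b] into n subintervals (endpoint values give inf and sup)."""
    h = (b - a) / n
    pts = [a + i * h for i in range(n + 1)]
    lo = hi = 0.0
    for t0, t1 in zip(pts, pts[1:]):
        f0, f1 = f(t0), f(t1)
        lo += min(f0, f1) * h
        hi += max(f0, f1) * h
    return lo, hi
```

For $f(x)=x^2$ this gives $S-s = \frac{1}{n}$ on $[-1,0]$ and $S-s = \frac{8}{n}$ on $[0,2]$, so both differences go to $0$ as $n\to\infty$.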

If $G$ is finite and $H$ is the only subgroup of a given order, then $H$ is normal.

Posted: 13 Nov 2021 09:18 PM PST

If $G$ is finite and $H$ is the only subgroup of a given order, then $H$ is normal.

I have a proof idea that I'm not sure works or not:

By Lagrange's theorem, the order of $G$ is equal to the order of $H$ times the number of distinct left cosets of $H$. But every left coset of $H$ has the same number of elements as $H$, so the order of $H$ must be equivalent to the order of $G$. Hence, $H = G$, so $H$ is normal.

Does this work? I'm aware of the proof using the conjugation map, but was curious to know if this works or not. Thank you!
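A small sanity check in $S_3$ (illustration only, not part of any proof): the unique subgroup of order 3 is normal yet proper, so an argument that concludes $H = G$ proves too much:

```python
from itertools import permutations

def compose(p, q):                      # (p*q)(i) = p[q[i]]
    return tuple(p[i] for i in q)

G = list(permutations(range(3)))        # S_3, order 6
e = (0, 1, 2)

# Subgroups of order 3 are {e, g, g^2} for g of order 3:
H3 = {frozenset({e, g, compose(g, g)})
      for g in G if g != e and compose(g, compose(g, g)) == e}
H = next(iter(H3))

inverse = {g: next(h for h in G if compose(g, h) == e) for g in G}
is_normal = all(compose(compose(g, h), inverse[g]) in H for g in G for h in H)
```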

Exponential functions: finding the power of the exponential without using logarithms

Posted: 13 Nov 2021 09:25 PM PST

$16^{18} + 16^{18} + 16^{18} + 16^{18} + 16^{18 }= 4^x$

What is $x$ equal to? How can I solve this without using logarithms? This question is from my math book and is deemed a challenge question; in the book's terms, we haven't learnt about logarithms yet.

Prove that $\int_{n}^{n+1}\frac{\log\left| 2\ \xi\left(\frac{1}{2}+it\right)\right|}{\frac{1}{4}+t^2}dt\leq 0$

Posted: 13 Nov 2021 09:12 PM PST

Question If $\xi$ is the Riemann xi function then prove that for fixed $n\in\mathbb{N}$ $$\int_{n}^{n+1}\frac{\log\left| 2\ \xi\left(\frac{1}{2}+it\right)\right|}{\frac{1}{4}+t^2}dt\leq 0$$ My try: Let $$I_n=\int_{n}^{n+1}\frac{\log\left| 2\ \xi\left(\frac{1}{2}+it\right)\right|}{\frac{1}{4}+t^2}dt $$ We prove that $I_n\leq 0$ by induction on $n$.

For the case $n=1$ we have $I_1\leq 0$.

Assume $I_k\leq 0$. Then I am unable to prove for $I_{k+1}$. Thank you for your time. Any hints are appreciated.

Can we have another set of inference rules which is simpler and equivalent (in the sense that they prove the same formulas)?

Posted: 13 Nov 2021 09:11 PM PST

I have some questions on inference rules in model theory.

Question 1) In (some books of) model theory, $(\psi \rightarrow \phi, \psi \rightarrow \forall x \phi)$ where $x$ is not free in $\psi$ is taken as an inference rule. I think that taking $(\phi,\forall x \phi)$ is much simpler than the one above. I intuitively see that they are equivalent. However, I could only prove that if we assume $\phi$, we can deduce $\forall x \phi$ using the first inference rule. Here is my work:

$\phi $ (just start with this)

$\phi \rightarrow (y=y \rightarrow \phi)$

$y=y \rightarrow \phi $

$y=y \rightarrow \forall x\phi $

$y=y$

$\forall x\phi $

However, I couldn't prove the other direction. So, is my claim false?

Question 2) If we have $(A,B)$ as an inference rule, can't we prove $A\rightarrow B$ in general? Note: I know that for logical inference rules, if we have $(A,B)$ then $A\rightarrow B$ is a tautology and we have $(\emptyset , A\rightarrow B)$ as an inference rule. But I want to "prove" $A\rightarrow B$ using other inference rules (except rules of the type $(\emptyset , \text{tautology})$).

Convex n-sided polygons whose exterior angles expressed in degrees are in arithmetic progression

Posted: 13 Nov 2021 09:39 PM PST

If the exterior angles of a convex n-sided polygon, expressed in degrees, are all integers in arithmetic progression, how many values of n are possible?

The sum of all exterior angles has to be 360°.

So, 360 has 24 factors in total. Subtracting the 2 factors 1 and 2, each of the other factors can easily represent the number of sides of a polygon.

So, on the whole, why can't there be 22 polygons having all exterior angles integers in AP?
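The count can be probed by brute force (a sketch; this enumerates integer first terms and common differences rather than reasoning about divisors, so it is a check of the question, not the intended argument). Since the sum of the AP is $n(2a+(n-1)d)/2 = 360$, any valid $n$ must divide $720$:

```python
def possible(n):
    """Can n positive integer exterior angles, each < 180 (convexity), in
    arithmetic progression, sum to 360?"""
    if 720 % n:                     # n(2a + (n-1)d) = 720 forces n | 720
        return False
    s = 720 // n                    # s = 2a + (n-1)d
    for d in range(-179, 180):      # common difference (d = 0 allowed)
        num = s - (n - 1) * d       # num = 2a
        if num > 0 and num % 2 == 0:
            first = num // 2
            last = first + (n - 1) * d
            if min(first, last) >= 1 and max(first, last) <= 179:
                return True
    return False

ns = [n for n in range(3, 361) if possible(n)]
```

Note that $n$ need not divide $360$: for example $n=16$ works with first term $15$ and common difference $1$ (angles $15,16,\dots,30$ sum to $360$), which already shows that counting only the divisors of $360$ misses some cases.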

The Rayleigh Expansions and Diffraction in Periodic Media

Posted: 13 Nov 2021 09:10 PM PST

Motivation/Background: The Rayleigh Expansions are well known in the electromagnetics community. They were derived by Lord Rayleigh and give a way of representing solutions to the Helmholtz equation with $\alpha$-quasiperiodic boundary conditions. As discussed in the first chapter of Electromagnetic Theory of Gratings by Roger Petit, they also provide a way of representing outgoing solutions as "propagating" (which are $p\in\mathcal{U}$ in the notation below) and incoming solutions as "evanescent" (i.e. exponential decay and are $p\not\in\mathcal{U}$ in the notation below).

Problem: Suppose we are given the following boundary value problem \begin{align} \Delta u + k^2 u &=0,\quad z>g(x),\tag{1a}\\ u(x+d,z)&=e^{i\alpha d}u(x,z),\tag{1b}\\ \end{align} where $(1a)$ is the Helmholtz equation and $(1b)$ is known as $\alpha$-quasiperiodicity. The quantity $g(x)$ is known as the surface where we consider solutions in the upper layer and use the notation $z$ (the height) > $g(x)$ (the surface). Lord Rayleigh deduced that solutions of this boundary value problem take the form

\begin{align} u(x,z)=\sum_{p=-\infty}^{\infty}a_pe^{i\alpha_px + i\gamma_p z}+ \sum_{p=-\infty}^{\infty}b_pe^{i\alpha_px - i\gamma_p z}, \tag{2} \end{align}

where

\begin{align} \alpha_p := \alpha + \left(\frac{2\pi}{d}\right)p, \quad \gamma_p:= \begin{cases} \sqrt{k^2-\alpha_p^2}, & p\in\mathcal{U},\tag{4} \\ i\sqrt{\alpha_p^2-k^2}, & p\not\in\mathcal{U}, \end{cases} \end{align} and \begin{align} \mathcal{U}:=\{p\in\mathbb Z ~|~ \alpha_p^2 < k^2\}\tag{5}. \end{align} The condition $(5)$ is also known as "propagating modes."

My Attempts: We split $(1\text{a})$ into a set of ordinary differential equations by considering $$u(x,z)=X(x)Z(z).$$ Substituting this product into the Helmholtz equation, we obtain $$X''Z + XZ'' + k^2XZ=0.$$ Dividing by $u = XZ$ and rearranging terms, we get \begin{align} \frac{X''}{X}= -k^2 - \frac{Z''}{Z}.\tag{6} \end{align} Equation $(6)$ exhibits one Separation of Variables. The left-hand side is a function of $x$ alone, whereas the right-hand side depends only on $z$ and not on $x$. But $x$ and $z$ are independent coordinates. The equality of both sides depending on different variables means that the behavior of $x$ as an independent variable is not determined by $z$. Therefore, each side must be equal to a constant which is known as the constant of separation. We choose \begin{align} \frac{X''}{X}&=\lambda^2,\tag{7} \\ -k^2 - \frac{Z''}{Z}&=\lambda^2.\tag{8} \end{align} Now, turning our attention to $(8)$, we obtain \begin{align} \frac{Z''}{Z} = -\left(\lambda^2 +k^2\right).\tag{9} \end{align} Our goal is to solve the ODEs $(7)$ and $(9)$ through our boundary conditions. Rearranging $(7)$ as $X''-\lambda^2 X=0$ and solving the auxiliary equation gives \begin{align} \quad X_n(x)=a_ne^{\lambda x} + b_ne^{-\lambda x}.\tag{10} \end{align} Similarly, we write $(9)$ as $Z'' + Z\left(\lambda^2+k^2\right)$ and solve the auxiliary equation to find \begin{align} \quad Z_n(z)=c_ne^{i\sqrt{\lambda^2+k^2}z} + d_ne^{-i\sqrt{\lambda^2+k^2}z}.\tag{11} \end{align} We first analyze $(10)$ and focus on the case where $x=0$. 
The quasiperiodicity boundary condition $(1\text{b})$, $u(x+d,z)=e^{i\alpha d}u(x,z)$, becomes $X(d)=e^{i\alpha d}X(0)$, which, after some simplification, delivers $$ \lambda = i\alpha.$$ Evaluating $(10)$ when $\lambda=0$ yields $$\alpha = \left(\frac{2\pi n}{d}\right),\quad n\in\mathbb Z.$$ Therefore, the auxiliary equation $(10)$ becomes \begin{align} \quad X_n(x)=a_ne^{i\alpha x} + b_ne^{-i\alpha x}.\tag{12} \end{align} Next, we analyze the auxiliary equations when $z=0.$ The boundary condition $(1\text{b})$ becomes $X(x+d)=e^{i\alpha d}X(x)$, which doesn't simplify $(11)$ or $(12)$. Inserting $\lambda=i\alpha$ into $(11)$ gives \begin{align} \quad Z_n(z)=c_ne^{i\sqrt{k^2-\alpha^2}z} + d_ne^{-i\sqrt{k^2 -\alpha^2}z}.\tag{13} \end{align} By $(4)$, we see that $(13)$ is equivalent to (this is clearly wrong since $\alpha \neq \alpha_p$; however, I don't see what I did wrong) \begin{align} \quad Z_n(z)=c_ne^{i\gamma_p z} + d_ne^{-i\gamma_p z}.\tag{14} \end{align} By $(12)$, $(14)$, the superposition principle, and replacing the integer $n$ by $p$, the general solution is \begin{align*} u(x,z)&=\sum_{p=-\infty}^{\infty}\Big( a_pe^{i\alpha x} + b_pe^{-i\alpha x}\Big)\Big(c_pe^{i\gamma_p z} + d_pe^{-i\gamma_p z}\Big)\\&= \sum_{p=-\infty}^{\infty}e_pe^{i\alpha x + i\gamma_p z} + \sum_{p=-\infty}^{\infty}f_pe^{i\alpha x - i\gamma_p z}. \end{align*} My Concerns: In step $(13)$ I should have obtained \begin{align} \quad Z_n(z)=c_ne^{i\sqrt{k^2-\alpha_p^2}z} + d_ne^{-i\sqrt{k^2 -\alpha_p^2}z},\tag{15} \end{align} where (as defined by $(4)$)

$$\alpha_p := \alpha + \left(\frac{2\pi}{d}\right)p.$$

Then the general solution would be

\begin{align*} u(x,z)&=\sum_{p=-\infty}^{\infty}\Big( a_pe^{i\alpha_p x} + b_pe^{-i\alpha_p x}\Big)\Big(c_pe^{i\gamma_p z} + d_pe^{-i\gamma_p z}\Big)\\&= \sum_{p=-\infty}^{\infty}e_pe^{i\alpha_p x + i\gamma_p z} + \sum_{p=-\infty}^{\infty}f_pe^{i\alpha_p x - i\gamma_p z}, \end{align*} and would match what Lord Rayleigh found. I don't see how to obtain this result by Separation of Variables with the given boundary condition.
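A hedged sketch of where the extra index may come from (my own reasoning, not part of the question): the complex exponential is $2\pi i$-periodic, so the condition $e^{\lambda d}=e^{i\alpha d}$ fixes $\lambda$ only modulo $2\pi i/d$, giving a whole family of admissible separation constants rather than the single value $\lambda=i\alpha$:

```latex
X(x)=e^{\lambda x},\quad X(x+d)=e^{i\alpha d}X(x)
\;\Longrightarrow\; e^{\lambda d}=e^{i\alpha d}
\;\Longrightarrow\; \lambda d = i\alpha d + 2\pi i p,\ p\in\mathbb{Z},
```
```latex
\lambda_p = i\!\left(\alpha + \frac{2\pi}{d}\,p\right) = i\alpha_p,
\qquad
Z_p(z)=c_p e^{i\gamma_p z} + d_p e^{-i\gamma_p z},
\quad \gamma_p=\sqrt{k^2-\alpha_p^2}.
```

If this reading is right, choosing $\lambda=i\alpha$ keeps only the $p=0$ branch; superposing one product solution $X_p Z_p$ per integer $p$ recovers the Rayleigh expansion $(2)$.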

Proof that the Shapley value satisfies the balanced contribution axiom

Posted: 13 Nov 2021 09:24 PM PST

I have read that Shapley values satisfy the balanced contribution axiom (e.g., p. 6).

However, I have not been able to show that this is indeed the case. I tried to prove it using the definition of the Shapley value (note that in the restricted game $v\backslash j$ there are only $n-1$ players, so its weights use $(n-|\mathcal{C}|-2)!/(n-1)!$): $$ \phi_{i}(v)-\phi_{i}(v \backslash j)=\phi_{j}(v)-\phi_{j}(v \backslash i) \\ \sum_{\mathcal{C} \subseteq N \backslash\{i\}} \frac{|\mathcal{C}|!(n-|\mathcal{C}|-1)!}{n!}(v(\mathcal{C} \cup\{i\})-v(\mathcal{C})) - \sum_{\mathcal{C} \subseteq N \backslash\{i, j\}} \frac{|\mathcal{C}|!(n-|\mathcal{C}|-2)!}{(n-1)!}(v(\mathcal{C} \cup\{i\})-v(\mathcal{C})) = \sum_{\mathcal{C} \subseteq N \backslash\{j\}} \frac{|\mathcal{C}|!(n-|\mathcal{C}|-1)!}{n!}(v(\mathcal{C} \cup\{j\})-v(\mathcal{C})) - \sum_{\mathcal{C} \subseteq N \backslash\{i, j\}} \frac{|\mathcal{C}|!(n-|\mathcal{C}|-2)!}{(n-1)!}(v(\mathcal{C} \cup\{j\})-v(\mathcal{C})) $$

But the only thing that can be reduced seems to be the $v(C)$ in the sum over $N \setminus \{i, j\}$. Is this the wrong approach?
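As a sanity check (my own illustrative script, not from the question; the characteristic function `v` and the weights `w` are arbitrary choices), the axiom can be verified numerically on a small asymmetric game:

```python
from itertools import combinations
from math import factorial

def shapley(players, v):
    """Shapley values for the TU game on `players` with characteristic function v."""
    n = len(players)
    phi = {}
    for i in players:
        others = [p for p in players if p != i]
        total = 0.0
        for r in range(n):
            for C in combinations(others, r):
                C = frozenset(C)
                weight = factorial(len(C)) * factorial(n - len(C) - 1) / factorial(n)
                total += weight * (v(C | {i}) - v(C))
        phi[i] = total
    return phi

# Arbitrary asymmetric 3-player game (illustrative choice).
w = {1: 1.0, 2: 2.0, 3: 4.0}
def v(S):
    return sum(w[p] for p in S) ** 2

players = frozenset({1, 2, 3})
phi = shapley(players, v)                   # Shapley values in the full game
i, j = 1, 2
phi_without_j = shapley(players - {j}, v)   # game restricted to N \ {j}
phi_without_i = shapley(players - {i}, v)   # game restricted to N \ {i}

# Balanced contributions: phi_i(v) - phi_i(v\j) == phi_j(v) - phi_j(v\i)
lhs = phi[i] - phi_without_j[i]
rhs = phi[j] - phi_without_i[j]
assert abs(lhs - rhs) < 1e-9
```

This only checks one game, of course, but it is a quick way to catch a wrong weight in the restricted game (the $(n-1)$-player game must use $(n-1)!$ in the denominator).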

Lower bound for isometric immersions

Posted: 13 Nov 2021 09:02 PM PST

I just read Azov's article, in which he considered two classes of Riemannian metrics, \begin{align*} ds^2&=du_1^2+f(u_1)\sum_{i=2}^ldu_i^2,&f>0\\ ds^2&=g^2(u_1)\sum_{i=2}^ldu_i^2 ,&g>0\end{align*} and solved the problem of their immersibility into $\mathbb R^n$ and $\mathbb S^n$, $n>l$. He proved the following theorems:

  • The metrics admit isometric immersions into the Euclidean space $\mathbb R^{4l-3}$.
  • The metrics admit isometric immersions into the sphere $\mathbb S^{4l-3}$.

But in Rozendorn's Surfaces of Negative Curvature, he mentions the following [figure omitted]. I can't understand how Rozendorn comes to that conclusion; can someone help me? On the other hand, is there already an improvement of that lower bound for isometric immersions?

This question is cross-posted at: https://mathoverflow.net/q/407095/171387

Why is this limit $\lim_{x\to \infty}x^2-x^2\cdot \cos\left(\frac{1}{x}\right)$ not $0$?

Posted: 13 Nov 2021 09:25 PM PST

I have this limit: $$ \lim_{x\to \infty}x^2-x^2\cdot \cos\left(\frac{1}{x}\right) $$

My initial answer would be zero, since: $$ \lim_{x\to \infty}x^2-x^2\cdot \cos\left(\frac{1}{x}\right)=\lim_{x\to \infty}x^2-\lim_{x\to \infty}x^2\cdot\lim_{x\to \infty}\cos\left(\frac{1}{x}\right), $$ which gives $$ \infty-\infty\cdot1=0. $$ But after looking at Wolfram Alpha and doing a series expansion of $\cos(x)$, I see that the answer is in fact $1/2.$

Why is my original thinking incorrect?
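As a quick numerical sanity check (my own illustrative script), evaluating $x^2\bigl(1-\cos(1/x)\bigr)$ at growing $x$ agrees with the series answer $1/2$, not $0$:

```python
import math

# The Taylor expansion cos(t) = 1 - t^2/2 + O(t^4) with t = 1/x
# suggests x^2 * (1 - cos(1/x)) -> 1/2 as x -> infinity.
def f(x):
    return x * x * (1.0 - math.cos(1.0 / x))

for x in (10.0, 100.0, 1000.0):
    print(x, f(x))  # values approach 0.5 from below
```

The flaw in the original argument is splitting the limit of a difference into a difference of limits when both pieces diverge: $\infty - \infty$ is an indeterminate form, so that algebraic step is not valid.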
