Sunday, August 22, 2021

Recent Questions - Mathematics Stack Exchange


Hartshorne $II, 6.9$, Degree of pull back map

Posted: 22 Aug 2021 08:43 PM PDT

I am studying Hartshorne's Algebraic Geometry and got stuck in understanding the proof of Proposition $6.9$, Chapter II, page $138$, where it is proved that for a finite morphism $f: X \to Y$ between nonsingular curves, $\deg f^{*} Q = \deg f$ for a closed point $Q$ of $Y$. Here $A^{\prime}$ is the free $\mathcal{O}_Q$-module of rank $r=[K(X):K(Y)]$, and the fibre over $Q$ consists of the points $P_1, P_2,\dots,P_n$, which correspond to the maximal ideals $m_1,\dots,m_n$. Here I don't understand the following:

  1. Why are the ideals $tA^{\prime}_{m_i} \cap A^{\prime}$ comaximal, so that we can apply the Chinese remainder theorem?
  2. Why is $\bigcap_i(tA^{\prime}_{m_i} \cap A^{\prime})= tA^{\prime}$?
  3. Why is $A^{\prime}/(tA^{\prime}_{m_i} \cap A^{\prime}) \cong A^{\prime}_{m_i}/tA^{\prime}_{m_i}$?

Thanks in advance. Any help or suggestion in understanding this proof is most welcome.

Can we exchange the orders of parameter updating in K-means algorithm?

Posted: 22 Aug 2021 08:35 PM PDT

In almost all references, it is said that we can update the parameters in the K-means algorithm through an iterative procedure:

  1. Set $\mu_k$ initial values
  2. Update $r_{nk}$ while fixing $\mu_k$
  3. Update $\mu_k$ while fixing $r_{nk}$

Iterate steps 2 and 3 until convergence. So I wonder: can we exchange the order of the updates? That is, first set initial values for $r_{nk}$, then update $\mu_k$ first and $r_{nk}$ second. Are there any problems with doing this? Thanks.
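For what it's worth, the swapped order can be tried directly. Here is a minimal 1-D K-means sketch (my own illustration with made-up data, not from any reference) that starts from an initial assignment $r_{nk}$ and updates $\mu_k$ first, then $r_{nk}$:

```python
# Minimal 1-D K-means sketch (made-up data): start from an initial
# assignment r instead of initial means, then update mu first, r second.
data = [1.0, 1.2, 0.8, 5.0, 5.3, 4.9]
r = [0, 1, 0, 1, 0, 1]          # arbitrary initial assignments for K = 2

for _ in range(10):
    # M-step first: mean of each cluster (fall back to 0.0 if a cluster empties)
    mu = []
    for k in (0, 1):
        members = [x for x, rk in zip(data, r) if rk == k]
        mu.append(sum(members) / len(members) if members else 0.0)
    # E-step second: reassign each point to its nearest mean
    r = [min((0, 1), key=lambda k: (x - mu[k]) ** 2) for x in data]

print(sorted(mu), r)
```

The main wrinkle with this ordering is that a cluster can start out (or become) empty, which the mean update must handle explicitly; otherwise the two orderings are equivalent up to which quantity you initialize.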

Condition for a system of quadratic equations to have at least one solution

Posted: 22 Aug 2021 08:33 PM PDT

$ax^2 +bx +cm =0$, $bx^2 + cx +am =0$ and $cx^2 + ax +bm=0$ are three quadratic equations in $x$, where $a, b, c$ are real numbers and $m$ is a positive real. Find the possible numerical values of $m$ such that at least one of these equations has a real root.

How do I attempt such a question? What is the intuition behind this?

I don't get where to start. Can someone help me out?

Thanks in advance.

Compute $\frac 17$ in $\Bbb{Z}_3$

Posted: 22 Aug 2021 08:27 PM PDT

Compute $\frac 17$ in $\Bbb{Z}_3.$

We will have to solve $7x\equiv 1\pmod p,~~p=3.$

  • We get $x\equiv 1\pmod 3.$
  • Then $x\equiv 1+3a_1\pmod 9,$ so $7(1+3a_1)\equiv 1 \pmod 9$; lifting to the next power of $p=3,$ we get $1+3a_1\equiv 4\pmod 9\implies a_1\equiv 1\pmod 3.$
  • So let $$x\equiv 1+3\cdot 1+3^2\cdot a_2 \pmod {27}\implies 7(4+3^2\cdot a_2)\equiv 1\pmod {27}\implies 4+3^2\cdot a_2\equiv 4\pmod {27}\implies a_2\equiv 0 \pmod 3.$$
  • So let $$ x\equiv 1+3\cdot 1+3^2\cdot 0+ 3^3\cdot a_3 \pmod {81}\implies 7(4+3^2\cdot 0+3^3\cdot a_3)\equiv 1\pmod {81}\implies 4+3^3\cdot a_3\equiv 58\pmod {81}\implies a_3\equiv 2 \pmod 3.$$
  • So let $$ x\equiv 1+3\cdot 1+3^2\cdot 0+ 3^3\cdot 2+3^4\cdot a_4 \pmod {243}\implies 7(4+3^2\cdot 0+3^3\cdot 2+3^4\cdot a_4)\equiv 1\pmod {243}\implies 1+3+54+3^4\cdot a_4\equiv 139\pmod {243}\implies a_4\equiv 1 \pmod 3.$$
  • So let $$ x\equiv 1+3\cdot 1+3^2\cdot 0+ 3^3\cdot 2+3^4\cdot 1+ 3^5\cdot a_5 \pmod {729}\implies 7(4+3^2\cdot 0+3^3\cdot 2+3^4\cdot 1+3^5\cdot a_5)\equiv 1\pmod {729}\implies 1+3+54+81+3^5\cdot a_5\equiv 625\pmod {729}\implies a_5\equiv 2 \pmod 3.$$

I haven't worked it out, but I think $a_6$ is $0.$

So the sequence we are getting is $(a_0,a_1,a_2,a_3,a_4,\dots)=(1,1,0,2,1,2,\dots).$

But I am not sure if it's correct, since it does not appear to be periodic. Any help?
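The digit-by-digit lifting above can be automated; here is a small sketch (my own, not part of the question) that solves $7x\equiv 1\pmod{3^k}$ one base-$3$ digit at a time:

```python
# Compute the first k digits a_0, a_1, ... of x = 1/7 in Z_3, i.e. x with
# 7x ≡ 1 (mod 3^k), by lifting one base-3 digit at a time.
def padic_inverse_digits(den, p, k, num=1):
    digits, x = [], 0
    for n in range(k):
        # find the digit a with den*(x + a*p^n) ≡ num (mod p^(n+1))
        for a in range(p):
            if (den * (x + a * p**n) - num) % p**(n + 1) == 0:
                digits.append(a)
                x += a * p**n
                break
    return digits

print(padic_inverse_digits(7, 3, 12))   # [1, 1, 0, 2, 1, 2, 0, 1, 0, 2, 1, 2]
```

The output agrees with the digits computed above, and from $a_1$ onward the block $1,0,2,1,2,0$ repeats. This resolves the worry about periodicity: a $p$-adic expansion of a rational number is always *eventually* periodic, and here the sequence is periodic from $a_1$ on rather than from $a_0$.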

Why is the birthday problem not so surprising?

Posted: 22 Aug 2021 08:27 PM PDT

I'm reading Blitzstein/Hwang's "Introduction to Probability". Here, they give an explanation as to why the conclusion obtained in the birthday problem is not so surprising:

[image: the textbook's explanation of the birthday problem]

I don't quite understand how this number of people makes it not so surprising. Could you expand a little bit?
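The usual intuition (which I believe is also Blitzstein and Hwang's point) is to count pairs rather than people; a quick numeric check of both quantities:

```python
from math import comb, prod

# Probability that at least two of n people share a birthday
# (365 equally likely birthdays, independent), plus the number of pairs.
def p_match(n):
    return 1 - prod((365 - i) / 365 for i in range(n))

print(round(p_match(23), 4))   # just over 1/2
print(comb(23, 2))             # number of pairs of people
```

With only 23 people there are already 253 pairs, and each pair is a separate chance at a match, so a probability above $1/2$ is less surprising than the raw comparison "23 out of 365" suggests.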

Does $(x*w)$ denote the resulting composite function, given $*$ denotes the convolution operation?

Posted: 22 Aug 2021 08:26 PM PDT

Chapter 9.1 of the Deep Learning book gives this formula

$s(t)=(x*w)(t) \tag{9.2}$

where the operator $*$ denotes the convolution operation.

I know what the convolution operation is; I just have difficulty understanding the notation.

Does $(x*w)$ denote the resulting composite function which takes $t$ as its independent variable?

Here is the page in question

[image: the page from the book]
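In my reading, yes: $(x*w)$ is itself a new function, built from $x$ and $w$, and $(x*w)(t)$ is that function evaluated at $t$, exactly as $(f\circ g)(t)$ means "the composite, evaluated at $t$". A discrete sketch of the same idea (my own illustration):

```python
# (x * w) is a new function/sequence; evaluating it at t gives s(t).
# Discrete analogue of s(t) = (x * w)(t):  s[t] = sum_a x[a] * w[t - a].
def convolve(x, w):
    n = len(x) + len(w) - 1
    return [sum(x[a] * w[t - a]
                for a in range(len(x)) if 0 <= t - a < len(w))
            for t in range(n)]

s = convolve([1.0, 2.0, 3.0], [0.5, 0.5])   # s IS the function (x * w)
print(s)   # [0.5, 1.5, 2.5, 1.5] -- its values at t = 0, 1, 2, 3
```

The key point is that `convolve` returns the whole sequence `s` first; indexing `s[t]` afterwards is the analogue of writing $(x*w)(t)$.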

Role of a witness in forcings that preserve a relation on the reals

Posted: 22 Aug 2021 08:43 PM PDT

I'm working through Martin Goldstern's "Tools for Your Forcing Construction" and am confused about definition 5.11.

First some notation:

  • $\mathbf C\subseteq{}^\omega\omega$ is some closed set,
  • $\langle{\sqsubset_n}\mid n\in\omega\rangle$ is a sequence of binary relations on ${}^\omega\omega$
  • ${\sqsubset}=\bigcup_{n\in\omega}{\sqsubset_n}$, which is therefore also a relation on ${}^\omega\omega$

We furthermore assume that each of the above sets can be defined by some nice enough formula. Given some model $\mathcal N$, we say that $g\in{}^\omega\omega$ covers $\mathcal N$ if for all $f\in\mathbf C\cap\mathcal N$ we have $f\sqsubset g$. Given some chain of conditions $\langle p_n\mid n\in\omega\rangle\in{}^\omega\Bbb Q$, a tuple of $\Bbb Q$-names for reals $(\dot f_0,\dots,\dot f_k)$ and a tuple of reals $(f_0^*,\dots,f_k^*)$, we say that $\langle p_n\mid n\in\omega\rangle$ interprets $(\dot f_0,\dots,\dot f_k)$ as $(f_0^*,\dots,f_k^*)$ if for each $i\leq k$ and all $n$ we have $p_n\Vdash \dot f_i\restriction n=f_i^*\restriction n$. We let $\chi$ be a large ordinal such that everything relevant to us happens in the hereditary universe $\mathbf H(\chi)$.

My confusion is about the role of $x$ in the following definition:

Definition 5.11. We say that a forcing $\mathbb Q$ preserves $\sqsubset$ if for some $x$ (called the "witness") and all countable elementary submodels $\mathcal N\prec\mathbf H(\chi)$ with each of $\mathbb Q,x,\sqsubset$ in $\mathcal N$ for which there is $g$ that covers $\mathcal N$ and for any chain of conditions $\langle p_n\mid n\in\omega\rangle\in{}^\omega \Bbb Q\cap\mathcal N$ interpreting $(\dot f_0,\dots,\dot f_k)$ as $(f^*_0,\dots,f^*_k)$ where each $f^*_i\sqsubset_{n_i} g$, there exists an $\mathcal N$-generic condition $q$ stronger than $p$ [sic] such that "$q\Vdash g\text{ covers }\mathcal N[G]$" and for each $i\leq k$ we have $q\Vdash \dot f_i\sqsubset_{n_i}g$.

(I believe that where it says "$q$ is stronger than $p$", it should say "$q$ is stronger than $p_0$")

The witness $x$ does not seem to be used in a meaningful way anywhere in the paper after this, and I'm not aware of its role in any other preservation results I have seen. It clearly puts a limit on which models $\mathcal N$ we want to consider, but I'm not seeing how this is used in practice. If we simply set $x$ to be something absolute, like $\varnothing$, then it does not really harm the definition either.

Furthermore, such a witness is absent in the definition of "almost preserving" (definition 5.5). Because of this, I'm not sure how to conclude that "preserving" implies "almost preserving".

Square root of negative iota

Posted: 22 Aug 2021 08:29 PM PDT

What will be the square root of negative iota?

Will it be iota multiplied by root iota, or something different?

I have attached my thinking in this image:
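For reference, here is one way to get the answer by direct computation (a sketch, independent of the attached image): write a candidate root and square it,

$$\left(\pm\frac{1-i}{\sqrt{2}}\right)^{2}=\frac{1-2i+i^{2}}{2}=\frac{-2i}{2}=-i,$$

so the two square roots of $-i$ are $\pm\frac{1-i}{\sqrt{2}}$. Equivalently, by De Moivre, $-i=e^{-i\pi/2}$ has the roots $e^{-i\pi/4}$ and $e^{i3\pi/4}$.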

A discrete time previsible martingale is constant; how can we prove it?

Posted: 22 Aug 2021 08:27 PM PDT

Prove that

A discrete time previsible martingale is constant.

From the law of martingale, every martingale is previsible.

How can a discrete martingale be constant?
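For reference, the standard one-line argument (a sketch; note that a martingale is in general only adapted, so previsibility is a genuine extra hypothesis here): if $(X_n)$ is a martingale and each $X_n$ is $\mathcal{F}_{n-1}$-measurable, then

$$X_n=\mathbb{E}[X_n\mid\mathcal{F}_{n-1}]=X_{n-1}\quad\text{a.s.},$$

where the first equality uses previsibility and the second the martingale property; induction then gives $X_n=X_0$ for all $n$.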

Construct an alternating series such that $\sum (-1)^n a_n$ converge but $\sum (-1)^n a_n^3$ diverge where $a_n>0$

Posted: 22 Aug 2021 08:20 PM PDT

If I write it out as follows:

$a_1-a_2+a_3-a_4\cdots$ converges but $a_1^3-a_2^3+a_3^3-a_4^3\cdots$ diverges.

What I have thought is to make the odd terms ($a_{2k+1}^3$) a divergent series like $1,\frac{1}{3}, \frac{1}{5},\dots$, and the even terms ($a_{2k}^3$) something convergent of smaller order, such as $\frac{1}{2^3},\frac{1}{4^3},\dots$; then $\sum (-1)^n a_n^3=1-\frac{1}{2^3}+\frac{1}{3}-\frac{1}{4^3}\cdots$ would be a divergent series. But I cannot come up with a satisfactory choice that also makes the original series converge.

Hope you could help 🙏
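As a numeric illustration of half of the plan above (my own sketch): the candidate cubed series $1-\frac{1}{2^3}+\frac13-\frac{1}{4^3}+\cdots$ really does drift off to infinity, since its positive part behaves like half the harmonic series:

```python
# Partial sums of the candidate cubed series 1 - 1/2^3 + 1/3 - 1/4^3 + ...:
# the positive odd terms are harmonic-like, so the sums grow without bound.
def partial_sum(N):
    s = 0.0
    for n in range(1, N + 1):
        s += 1.0 / n if n % 2 == 1 else -1.0 / n**3
    return s

print(partial_sum(10**3), partial_sum(10**5))   # keeps growing
```

This only confirms the divergence half of the construction; the remaining difficulty, as noted, is arranging the uncubed series to converge at the same time.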

Solve differential equation $\ddot{x} = -a \frac{x}{|x|^3}$

Posted: 22 Aug 2021 07:54 PM PDT

Came across this in a physics problem. Let $x$ be a plane vector dependent on time $t$. Let $\ddot{x}$ be the second-order derivative of $x$ with respect to $t$, and let $|x|$ be the Euclidean norm of the vector $x$. Solve

$\ddot{x} = -a \frac{x}{|x|^3}$

I'm trying to start by solving this as an ordinary differential equation $y^{''} = -a \frac{y}{|y|^3}$ where $y$ is a scalar. But even this ODE is quite difficult to handle, especially the $|y|$ part.
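This is the Kepler (inverse-square) problem; as far as I know there is no elementary closed form $x(t)$ in general, though conservation of energy and angular momentum reduces the orbits to conic sections. The system is easy to explore numerically; a leapfrog sketch of mine under assumed initial conditions (circular orbit, $a=1$):

```python
import math

# Numeric sketch of x'' = -a x/|x|^3 in the plane using a velocity-Verlet
# (leapfrog) step; initial data chosen to give a circular orbit of radius 1.
a, dt = 1.0, 1e-3
x, v = [1.0, 0.0], [0.0, 1.0]    # |v| = sqrt(a/r) gives a circular orbit

def acc(x):
    r3 = math.hypot(*x) ** 3
    return [-a * xi / r3 for xi in x]

for _ in range(10000):
    g = acc(x)
    v = [vi + 0.5 * dt * gi for vi, gi in zip(v, g)]
    x = [xi + dt * vi for xi, vi in zip(x, v)]
    g = acc(x)
    v = [vi + 0.5 * dt * gi for vi, gi in zip(v, g)]

r_final = math.hypot(*x)
print(r_final)   # stays close to 1.0: the circular orbit is preserved
```

A symplectic step like leapfrog is the natural choice here because it approximately conserves the energy that the exact flow conserves.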

Is there a general method for solving a linear homogeneous ordinary differential equation?

Posted: 22 Aug 2021 07:46 PM PDT

I am wondering if there is a general method for finding the solutions of a linear ODE of the form :

$Ly=0$

Note: the coefficients of the homogeneous linear ODE may be variable.

Is there a method to factor equations with two variables raised to the second power?

Posted: 22 Aug 2021 08:23 PM PDT

I found the equation $2b^2-ab-a^2=0$ on a problem and couldn't find a way to factor it. Is there any method to factor these types of equations?
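One general approach (a sketch): treat the expression as a quadratic in one of the variables, say $b$, and apply the quadratic formula to $2b^2-ab-a^2=0$:

$$b=\frac{a\pm\sqrt{a^2+8a^2}}{4}=\frac{a\pm 3a}{4}\in\left\{a,\,-\tfrac{a}{2}\right\},\qquad\text{so}\qquad 2b^2-ab-a^2=(b-a)(2b+a).$$

Expanding $(b-a)(2b+a)=2b^2+ab-2ab-a^2$ confirms the factorization.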

Solving $y'=\frac{xye^{\frac xy}+y^2}{x^2 e^{\frac xy}-y^2}$

Posted: 22 Aug 2021 07:56 PM PDT

I have the ordinary differential equation $y'=\dfrac{xye^{\frac xy}+y^2}{x^2 e^{\frac xy}-y^2}$ Which can be written as $\dfrac{dx}{dy}=\dfrac{(\frac xy)^2 e^{\frac xy}-1}{\frac xye^{\frac xy}+1}$. By using the substitution $v=\frac xy$ we have,

$$v'y+v=\dfrac{v^2e^v-1}{ve^v+1}\quad\Rightarrow\quad y\frac {dv}{dy}=\dfrac{-v-1}{ve^v+1}\quad\Rightarrow\quad -\frac{dy}y=\dfrac{ve^v+1}{v+1}dv$$ Hence $\int\dfrac{ve^v+1}{v+1}dv=-\ln|y|+c$. So I have to evaluate the integral,

$$\int\dfrac{ve^v+1}{v+1}dv=\int \frac1{v+1}+\frac{(v+1)e^v-e^v}{v+1}dv=\ln|v+1|+e^v-\int\frac{e^v}{v+1}dv$$

But I don't know how to evaluate the last integral. According to this post I can rewrite it in the form $\frac1e\int\dfrac{e^t}tdt$, which still has no closed form. On the other hand, the book that I'm reading provides only the final solution of the ODE, which is $(\frac xy-1)e^{\frac xy}=c-\ln y$. I don't know how to get this answer or what I am missing in my approach.
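One thing worth checking (my own finite-difference sketch, not from the book): whether the quoted final answer actually satisfies the quoted ODE. Along a level set of $F(x,y)=(\frac xy-1)e^{x/y}+\ln y$, implicit differentiation gives $y'=-F_x/F_y$, which can be compared numerically with the stated right-hand side at a sample point:

```python
from math import exp, log

# Does the book's implicit solution F(x, y) = (x/y - 1) e^{x/y} + ln y = c
# satisfy the stated ODE? Compare the implicit slope -F_x/F_y with the
# ODE's right-hand side at a sample point, via central differences.
def F(x, y):
    return (x / y - 1) * exp(x / y) + log(y)

x0, y0, h = 2.0, 1.0, 1e-6
Fx = (F(x0 + h, y0) - F(x0 - h, y0)) / (2 * h)
Fy = (F(x0, y0 + h) - F(x0, y0 - h)) / (2 * h)
slope_solution = -Fx / Fy

e = exp(x0 / y0)
slope_ode = (x0 * y0 * e + y0**2) / (x0**2 * e - y0**2)

print(slope_solution, slope_ode)   # if these disagree, suspect a typo somewhere
```

In my run the two slopes disagree at $(2,1)$ (by about $0.035$), which suggests a typo either in the ODE as stated or in the book's printed answer; as far as I can tell, the printed answer instead satisfies the same ODE with the $+y^2$ term dropped from the numerator.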

Prove a inequality property of convex function

Posted: 22 Aug 2021 07:47 PM PDT

Problem: Let $f: \mathbb{R}^n \to \overline{\mathbb{R}}$ be convex. Let $x_1,x_3 \in E$ (a Euclidean space in $\mathbb{R}^n$) and $x_2 \in (x_1,x_3)$. Prove that $$\dfrac{f(x_3) -f(x_2)}{\Vert x_3-x_2 \Vert} \ge\dfrac{f(x_2)-f(x_1)}{\Vert x_2-x_1\Vert}.$$

My attempt: Since $x_2 \in (x_1,x_3)$ there exists $t \in (0,1)$ such that $x_2 = tx_1 + (1-t)x_3$. Thus we have $$f(x_3) - f(x_2) \ge f(x_3)-tf(x_1)-(1-t)f(x_3) = tf(x_3)-tf(x_1).$$ Therefore $$\dfrac{f(x_3)-f(x_2)}{\Vert x_3-x_2\Vert} \ge \dfrac{t}{\vert t \vert}\dfrac{f(x_3)-f(x_1)}{\Vert x_3-x_1\Vert} \ge -\dfrac{f(x_3)-f(x_1)}{\Vert x_3-x_1\Vert} = \dfrac{f(x_1)-f(x_3)}{\Vert x_1-x_3\Vert}.$$

I have tried many different ways to obtain a result like the one above, but I failed. I wonder whether the problem statement is correct.

Derivatives of definite integrals

Posted: 22 Aug 2021 07:45 PM PDT

In my textbook it has two problems: $$\frac{d}{dx}\int^x_2\sin(t^2)dt$$ and $$\frac{d}{dx}\int^{\cos(x)}_0e^{t^2}dt$$

It says that the first derivative is (obviously) equal to $\sin(x^2)$, and then gives the more complicated answer to the latter as $-\sin(x)e^{\cos^2(x)}$.

How should I interpret problems such as these and how should I go about trying to solve them? Why is only $x$ considered in the first derivative and not $2$ also, and how do they arrive at the second answer?
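The pattern behind both is the Fundamental Theorem of Calculus plus the chain rule: $\frac{d}{dx}\int_{a}^{u(x)} f(t)\,dt = f(u(x))\,u'(x)$, and the fixed lower limit only contributes a constant, which differentiates to zero (that is why the $2$ never appears). A quick numeric sanity check of the second answer (my own sketch):

```python
from math import sin, cos, exp

# Check d/dx ∫_0^{cos x} e^{t^2} dt = -sin(x) e^{cos^2 x} numerically:
# midpoint-rule integral, then a central finite difference in x.
def integral(upper, steps=20000):
    h = upper / steps
    return sum(exp(((i + 0.5) * h) ** 2) for i in range(steps)) * h

x, h = 0.7, 1e-5
numeric = (integral(cos(x + h)) - integral(cos(x - h))) / (2 * h)
formula = -sin(x) * exp(cos(x) ** 2)
print(numeric, formula)   # agree to several decimal places
```

Here $u(x)=\cos x$, so the answer is $e^{u(x)^2}\cdot u'(x)=e^{\cos^2 x}\cdot(-\sin x)$, matching the textbook.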

Existence of a non-zero vector

Posted: 22 Aug 2021 07:42 PM PDT

For integers $i \geq 1$, let fixed real constants $a_i, b_i, c_i, d_i$ be given, such that $|a_i|+|b_i|\neq0$ and $|c_i|+|d_i|\neq0.$ Do there necessarily exist sequences $s_i$ and $t_i$ of real numbers, not both zero sequences, such that $s_i=a_is_{i+1}+b_it_{i+1}$ and $t_i=c_is_{i+1}+d_it_{i+1}$ for every $i$?

At first, I thought the answer is yes, however when I tried to justify the answer I could not do it. The difficulty of this is the infinite recurrent definition of $s_1$ and $t_1$.

I thought of using Zorn's Lemma and the Axiom of choice however I failed to set up the problem in those settings. Maybe $z$ is not well defined, although I would be surprised.


Question before editing:

Setting of the question: For each $n\in\mathbb{N}$, let $s_n$ and $t_n$ be real variables. Also, for each $m\in\mathbb{N}$ consider the vector $$w_m=((s_1,t_1),(s_2,t_2),\dots,(s_m,t_m))\in\left(\mathbb{R}^2\right)^m$$ where the variables $(s_i,t_i)$ are non-trivial linear combinations of $(s_{i+1},t_{i+1})$ for all $i$. That is, there are constants $a_i,b_i,c_i,d_i\in\mathbb{R}$ such that $s_i=a_is_{i+1}+b_it_{i+1}$ and $t_i=c_is_{i+1}+d_it_{i+1}$ where $|a_i|+|b_i|\neq0$ and $|c_i|+|d_i|\neq0$ for all $i$.

Question: Does a non-zero vector of the form $$z=((s_1,t_1),(s_2,t_2),\dots,(s_m,t_m),(s_{m+1},t_{m+1}),\dots)$$ exist in $\left(\mathbb{R}^2\right)^\infty$? (Here the components of $z$ must satisfy the following condition: the variables $(s_i,t_i)$ are non-trivial linear combinations of $(s_{i+1},t_{i+1})$ for all $i$.)
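Just to separate the easy part from the hard part (my own sketch): at any finite depth a non-zero assignment always exists, because one may choose $(s_{m+1},t_{m+1})$ freely and propagate downward through the relations; the real question is whether those finite choices cohere in the limit, which is an inverse-limit/compactness issue.

```python
import random

# Finite truncations are easy: pick (s_{m+1}, t_{m+1}) non-zero and propagate
# down through s_i = a_i s_{i+1} + b_i t_{i+1}, t_i = c_i s_{i+1} + d_i t_{i+1}.
random.seed(0)
m = 10
coeffs = [tuple(random.uniform(-1, 1) for _ in range(4)) for _ in range(m)]

s, t = 1.0, 1.0                      # free non-zero choice at level m + 1
levels = [(s, t)]
for a, b, c, d in reversed(coeffs):  # compute level i from level i + 1
    s, t = a * s + b * t, c * s + d * t
    levels.append((s, t))

print(levels[-1])   # the induced (s_1, t_1)
```

Note the propagated pair can still hit $(0,0)$ at some level: the conditions $|a_i|+|b_i|\neq0$ and $|c_i|+|d_i|\neq0$ do not rule that out, which is part of what makes the infinite question delicate.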

Solving a "messy" constrained optimization problem

Posted: 22 Aug 2021 08:06 PM PDT

Let $N\in\{2,3,...\}$ and arbitrarily fix $(\alpha_i,\beta_i,\gamma_i,\delta_i)\in(0,1]^4$ for each $i\in\{1,...,N\}$. Then consider the following constrained optimization problem:

\begin{cases} \max\limits_{x_{1},...,x_N} & \sum_{i=1}^N \frac{1}{1+\frac{\alpha_i x_i + \gamma_i}{\beta_i x_i + \delta_i}}x_i\\ \ \ \ \ \text{s.t.} & x_i\in[0,1]\ \ \forall i\in\{1,...,N\},\\ & \sum_{i=1}^Nx_i =1 \end{cases}

I am having a lot of difficulty in making a dent in solving this. I have tried mathematical induction, but to no avail. Trying to "directly" solve this also seems foolhardy. If anyone knows how to solve this, I would really appreciate the help!

Edit: I am interested in analytically deriving the solution to the above optimization problem. (If a closed-form solution is possible, even better!) Many thanks to Max below for their clarifying question.
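Absent an analytic solution, a brute-force sketch of the smallest case $N=2$ (the coefficients below are hypothetical, chosen only for illustration) at least shows where the maximum lands on the simplex:

```python
# Brute-force the N = 2 case on a grid over the simplex x1 + x2 = 1,
# for one arbitrary (hypothetical) choice of (alpha, beta, gamma, delta).
params = [(0.3, 0.8, 0.5, 0.9), (0.7, 0.2, 0.4, 0.6)]

def objective(xs):
    total = 0.0
    for (al, be, ga, de), x in zip(params, xs):
        ratio = (al * x + ga) / (be * x + de)
        total += x / (1.0 + ratio)
    return total

best_val, best_x1 = max((objective((x1, 1 - x1)), x1)
                        for x1 in (i / 1000 for i in range(1001)))
print(best_val, best_x1)
```

Experiments like this can suggest whether the optimum sits at a vertex or in the interior of the simplex, which in turn hints at whether a KKT-style analytic attack is likely to produce a clean answer.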

Evaluating $\Gamma(1/2)$ using elementary methods

Posted: 22 Aug 2021 07:41 PM PDT

Can we prove $\Gamma(1/2)=\sqrt{\pi}$ using elementary techniques? I can easily prove properties of the gamma function like $\Gamma(n+1)=n\Gamma(n)$ and the value of $\Gamma(1)$ using integration by parts. Most of the proofs I saw used the integral $\int_0^\infty e^{-x^2}\,dx$, whereas when I substitute $x^2=t$ in this integral I get the answer as $(1/2)\Gamma(1/2)=\sqrt{\pi}/2$. Because I am a high school student, I know only elementary methods for solving integrals.

Non-conventional methods or other beautiful methods (elementary or near-elementary) to evaluate this are appreciated.
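Not a proof, but a numeric confirmation that the target value is right (using the same substitution $t=x^2$ mentioned above, which turns $\Gamma(1/2)$ into $2\int_0^\infty e^{-x^2}\,dx$):

```python
from math import exp, pi, sqrt

# Numeric check: Γ(1/2) = ∫_0^∞ t^{-1/2} e^{-t} dt = 2 ∫_0^∞ e^{-x^2} dx = √π.
# Midpoint rule on [0, 10]; the tail beyond 10 is below e^{-100}.
def gamma_half(upper=10.0, steps=100_000):
    h = upper / steps
    return 2 * sum(exp(-(((i + 0.5) * h) ** 2)) for i in range(steps)) * h

print(gamma_half(), sqrt(pi))   # both ≈ 1.7724538...
```

The substitution moves the singularity of $t^{-1/2}$ at $t=0$ out of the integrand, which is why the transformed integral is so pleasant numerically.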

What is the meaning of $i^{\sigma ^{−1}}$?

Posted: 22 Aug 2021 08:09 PM PDT

A string $x$ is a map $\Omega \rightarrow \Sigma$ from a finite set $\Omega$ of positions to a finite set $\Sigma$ of letters, the alphabet.

Let $\Sigma^\Omega $ denote the set of all strings. In examples we will write strings simply as chains of characters, e.g. $x = raspberry$ for $\Omega = \{1, \cdots, 9\}$ and the lower-case English alphabet $\Sigma= \{a, \cdots, z\}$.

The action of $\text{Sym}(\Omega)$ on $\Omega$ induces an action on $\Sigma^\Omega $.

For $ \sigma \in \text{Sym}(\Omega)$ and $x \in \Sigma^\Omega $, the string $x^\sigma $ is defined in this document (page 4) by:

$x^\sigma (i) = x(i^{\sigma ^{−1}})$ for all $i \in \Omega$.

What is the meaning of $i^{\sigma ^{−1}}$?

Is $i$ the $i$th position in $x$? For example, if $x = raspberry$ then $x(2)=a$? Then if $\sigma = (12)$, $x^\sigma = arspberry$, right? Then why did the author use the inverse of $\sigma$ on $i$ in $x(i^{\sigma ^{−1}})$?

It would seem natural to use $i^{\sigma}$ instead of $i^{\sigma ^{−1}}$, though the author mentions that "This twist is necessary in order to have ..". I don't understand what the author meant here.
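A small sketch of mine of why the inverse appears: with $x^\sigma(i)=x(i^{\sigma^{-1}})$, the letter sitting at position $j$ moves *to* position $j^\sigma$, and the twist is exactly what makes $(x^\sigma)^\tau = x^{\sigma\tau}$ a genuine (right) action:

```python
# The action x^σ(i) = x(i^{σ^{-1}}): the letter at position j lands at j^σ,
# and (x^σ)^τ = x^{στ}. Positions are 0-indexed; σ is a dict pos -> pos.
def act(x, sigma):
    inv = {v: k for k, v in sigma.items()}
    return ''.join(x[inv[i]] for i in range(len(x)))

def compose(sigma, tau):          # i^{στ} = (i^σ)^τ
    return {i: tau[sigma[i]] for i in sigma}

n = 9
sigma = {0: 1, 1: 0, **{i: i for i in range(2, n)}}   # the transposition (1 2)
tau = {i: (i + 1) % n for i in range(n)}              # a 9-cycle

x = "raspberry"
print(act(x, sigma))                                            # 'arspberry'
print(act(act(x, sigma), tau) == act(x, compose(sigma, tau)))   # True
```

If one instead defined $x^\sigma(i)=x(i^{\sigma})$, letters would move to position $j^{\sigma^{-1}}$ and the two-step identity above would come out reversed, which is presumably the "twist" the author alludes to.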

Understanding a proof of uncountably of $\mathbb{R}$ using decimal expansions

Posted: 22 Aug 2021 07:43 PM PDT

I am trying to understand this proof that $\mathbb{R}$ is uncountable.

Proceeding by contradiction, suppose there exists a surjection $f: \mathbb{N} \to \mathbb{R}$. Consider the possible decimal expansions of $f(n)$ for any $n \in \mathbb{N}$; there may be multiple. Consider the $n$th place after the decimal point, and let $S_n$ be the set of integers that appear in this $n$th place among the possible decimal expansions. Having fixed an $n$, $S_n$ has at most two elements. Now, for each $n$, pick an integer $a_n$ satisfying $0 \leq a_n \leq 9$ that is not in $S_n$. We can do that as $|S_n| \leq 2$. Now consider $a = 0.a_1 a_2 a_3 \ldots$ Then $a \neq f(n)$ for any $n \in \mathbb{N}$ as it differs from $f(n)$ in the $n$th decimal place, so $a \not \in \mathrm{Im}(f)$. So $f$ is not a surjection.

There are two things I don't fully understand.

(1) How do we know there are only two possible digits in the $n$th decimal place? I cannot figure out how to prove this, but I know "intuitively" that there can be an expansion ending in a string of $9$'s and an expansion ending in a string of $0$'s.

(2) This proof does not consider binary expansions, so why am I only required to consider the digits after the decimal point? Is it enough to simply differ in these positions to ensure $a \not \in f(\mathbb{N})$?

UPDATE: Proposed proof of (1):

In fact, every real number has at most two decimal expansions. There are three exhaustive possibilities. First, if the expansion of $x \in \mathbb{R}$ terminates, then decrementing the last digit and appending an infinite string of $9$'s gives an alternate decimal expansion of $x$. Second, if the decimal expansion of $x$ ends with an infinite string of $9$'s, then we can find another decimal expansion by deleting the string of $9$'s and incrementing the final digit. Third and finally, if the decimal expansion of $x$ neither terminates nor ends in an infinite string of $9$'s, it has only a single, unique decimal expansion.
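Regarding (2), one convenient dodge (a sketch of a standard variant of the trick, not necessarily the proof's own choice) is to pick each diagonal digit avoiding not just the digit of $f(n)$ but also $0$ and $9$; the constructed number then has a unique decimal expansion, so dual expansions cause no trouble at all:

```python
# Diagonal construction on a finite sample of "f(n)" digit strings: the
# n-th chosen digit differs from the n-th digit of row n, and avoids 0 and 9
# so the resulting number has a unique decimal expansion.
def diagonal(rows):
    out = []
    for n, row in enumerate(rows):
        forbidden = {int(row[n]), 0, 9}
        out.append(next(d for d in range(1, 9) if d not in forbidden))
    return out

rows = ["1415926", "7182818", "4142135"]
print(diagonal(rows))   # each entry differs from the diagonal digit of rows
```

Differing from $f(n)$ only after the decimal point is enough, because the constructed $a$ lies in $[0,1]$ and already disagrees with every $f(n)$ in some digit of a unique expansion.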

Square of a binary matrix

Posted: 22 Aug 2021 08:32 PM PDT

I want to know how many binary matrices (with entries 0 or 1) $A=[a_{ij}]_3$ exist such that $b_{ij}\geq a_{ij}, \forall i,j=1,2,3$, where $B=A^2=[b_{ij}]_3$.

I have tried the same for matrices of order $2$ and came up with the answer $13$.
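For what it's worth, the order-$2$ count can be brute-forced (my sketch below reproduces $13$), and the same loop over all $2^9$ matrices settles the $3\times 3$ question by exhaustion:

```python
from itertools import product

# Brute-force the 2x2 case from the post: count binary A with (A^2)_ij >= A_ij.
def matmul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

count = 0
for bits in product((0, 1), repeat=4):
    A = [list(bits[0:2]), list(bits[2:4])]
    B = matmul(A, A)
    if all(B[i][j] >= A[i][j] for i in range(2) for j in range(2)):
        count += 1

print(count)   # 13, matching the hand count above
```

Changing `repeat=4` to `repeat=9` and reshaping the bits into three rows runs over all $512$ binary $3\times3$ matrices and answers the actual question directly.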

Rate of Increase of Area of Circle / Volume of Sphere [duplicate]

Posted: 22 Aug 2021 07:53 PM PDT

The rate of change of area of a circle with respect to change in its radius is equal to its circumference, as shown below: $$\frac{dA}{dr} = \frac{d}{dr}(\pi r^2) = 2\pi r$$

The rate of change of the volume of a sphere with respect to change in its radius is equal to its surface area: $$\frac{dV}{dr} = \frac{d}{dr}\left(\frac{4}{3}\pi r^3\right) = 4\pi r^2$$

Is there a geometrical/mathematical explanation for this?

Looking for an approximation of $\left|E_{2 n-1}(1)-E_{2 n-1}(0)\right|$ (Euler polynomials)

Posted: 22 Aug 2021 07:35 PM PDT

Trying to answer this question, I face the problem of approximating

$$a_n=\Big|E_{2 n-1}(1)-E_{2 n-1}(0)\Big|$$ in order to be able to compute a priori the value of $n$ such that $$\frac{\pi ^{2 n+3} \Big|E_{2 n-1}(1)-E_{2 n-1}(0)\Big|}{16 (2 n+3)!}\leq \epsilon$$

From definition $$E_{2 n-1}(1)-E_{2 n-1}(0)=4^{1-n}\, \Gamma(2n)\,\sum_{k=0}^n \frac{E_{2 k}}{\Gamma (2 k+1) \Gamma (2 n-2 k)}$$

Empirically, it seems that $$\log\left(\Bigg|\sum_{k=0}^n \frac{E_{2 k}}{\Gamma (2 k+1) \Gamma (2 n-2 k)} \Bigg|\right) \sim \log(2) + 2 \log \left(\frac{2}{\pi }\right)\,n=\log \left(2^{2 n+1} \pi ^{-2 n}\right) \tag 1$$

For a check, a quick and dirty linear regression (for $n=1$ to $n=1000$) gives (with $R^2 =0.99999999984$ !)

$$\begin{array}{clclclclc} \text{} & \text{Estimate} & \text{Standard Error} & \text{Confidence Interval} \\ \alpha & +0.6940519 & 4.21\times 10^{-4} & \{+0.6932257,+0.6948781\} \\ \beta & -0.9031668 & 7.29\times 10^{-7} & \{-0.9031682,-0.9031653\} \\ \end{array}$$

Comparing $$a_n=\Big|E_{2 n-1}(1)-E_{2 n-1}(0)\Big|\qquad \text{and} \qquad 8 \pi ^{-2 n} \Gamma (2 n)$$ the relative error is lower than $1.7 \times 10^{-3}$% as soon as $n \geq 5$ and lower than $2.9 \times 10^{-8}$% as soon as $n \geq 10$.

If $(1)$ is true (or close to true), the problem would reduce to finding $n$ such that $$\frac{\pi ^3 \,\Gamma (2 n)}{2 \Gamma (2 n+4)}\leq \epsilon$$ which would lead to $$n=\left\lceil \frac{1}{4} \left(\sqrt{5+2 \sqrt{4+\frac{2 \pi ^3}{\epsilon }}}-3\right)\right\rceil$$

Any idea or suggestion would be more than welcome.
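A quick numeric cross-check of the claimed approximation (my own sketch, reusing the Euler-number formula quoted above and the recurrence $\sum_{k=0}^{n}\binom{2n}{2k}E_{2k}=0$):

```python
from math import comb, factorial, gamma, pi

# Numeric check of the proposed asymptotic a_n ≈ 8 π^{-2n} Γ(2n), computing
# E_{2n-1}(1) - E_{2n-1}(0) from the Euler-number formula quoted above.
def euler_numbers(m):                 # returns E[k] = E_{2k}
    E = [1]
    for n in range(1, m + 1):
        E.append(-sum(comb(2 * n, 2 * k) * E[k] for k in range(n)))
    return E

n = 8
E = euler_numbers(n)
# k = n term vanishes (1/Γ(0) = 0), so sum over k = 0 .. n-1
s = sum(E[k] / (factorial(2 * k) * gamma(2 * n - 2 * k)) for k in range(n))
a_n = abs(4 ** (1 - n) * gamma(2 * n) * s)
approx = 8 * pi ** (-2 * n) * gamma(2 * n)
print(a_n, approx, a_n / approx - 1)   # relative error is tiny already at n = 8
```

This is consistent with the regression estimate: the fitted slope $-0.9031668$ is essentially $2\log(2/\pi)\approx-0.903169$, and the intercept is close to $\log 2$.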

Evaluate $\int_1^\infty \sum^{\lfloor x\rfloor}_{n=1}\frac{\sin nx\pi}{x^n}dx$

Posted: 22 Aug 2021 08:36 PM PDT

Evaluate $$\int_1^\infty \sum^{\lfloor x\rfloor}_{n=1}\frac{\sin nx\pi}{x^n}dx$$ where $\lfloor \cdot \rfloor$ is the floor function.

Okay, so this is a problem that really stumped me. I've had quite a few attempts/thoughts at it, though. I tried expanding first to get a sense of what is going on: $$\int_1^\infty\left(\frac{\sin x\pi}{x}+\frac{\sin 2x\pi}{x^2}+\frac{\sin 3x\pi}{x^3}+\dots+\frac{\sin(\lfloor x\rfloor x\pi)}{x^{\lfloor x\rfloor}}\right)dx$$ Then I thought I could try splitting the integral term by term? Not sure if that's even possible. And then I realised that the solutions to those integrals aren't elementary; they use the $\operatorname{Si}$ function. But yeah, I'm pretty stuck and unsure whether what I've done is even valid.

Discriminant of algebraic number field is $0$ or $1 \bmod 4$

Posted: 22 Aug 2021 08:36 PM PDT

I am reading the following theorem of Ludwig Stickelberger in Introductory Algebraic Number Theory written by S. Alaca and Kenneth S. Williams

Let $K$ be an algebraic number field of degree $n$. Then the discriminant $d(K)$ of $K$ satisfies $d(K) \equiv 0 \text{ or } 1 \bmod 4$

To show this,
Let $\{w_1,w_2, \ldots,w_n \} $ be an integral basis for $K$. Let $\{w_i^{(j)} \, \vert \, j = 1,2 , \ldots n \}$ be the set of $K$ - conjugates of $w_i$ with $w_i^{(1)} = w_i$ for $i = 1,2 , \ldots ,n$ .

A fairly simple induction proof is sufficient to show that the expansion of the determinant $\det = \begin{vmatrix} w_1^{(1)} & w_2^{(1)} & \cdots & w_n ^{(1)} \\ \vdots & \cdots & \cdots &\vdots\\ w_1^{(n)} & w_2^{(n)} & \cdots & w_n^{(n)} \end{vmatrix}$

contains $n!$ terms, half of which occur with positive signs and half with negative signs.

Let the sum of those with positive signs be $\lambda$ and those with negative signs $\mu$ so that $$ \det = \lambda - \mu$$

Let $A = \lambda + \mu$ and $B= \lambda \mu$, then $d(K)= \det^2 = (\lambda - \mu)^2 = A^2 - 4B$

It is easy to see that each $w_i^{(j)}$ is an algebraic integer, and so are $\lambda$ and $\mu$, since the set of algebraic integers ($\Omega$) forms an integral domain. For the same reason, we see that $A, B \in \Omega$.

Now the aim is to show that $A$ and $B$ are rational numbers. We note that there exists $\theta \in \Omega$ such that $K = \mathbb{Q}(\theta)$; let $\theta_1 = \theta, \theta_2, \ldots ,\theta_n$ be the conjugates of $\theta$ over the rationals. Then the author says

If we express each $w_j$ as a polynomial in $\theta$ with rational coefficients, $A$ becomes a symmetric function of $\theta_1, \cdots , \theta_n$ with rational coefficients and so $A \in \mathbb{Q}$

I know the author is using the fundamental theorem of symmetric polynomials to conclude that $A$ is a rational number. But I have issues with the above claim.

$1$. I think it should be

$\dots w_j$ as a polynomial in $\theta_j$ with rational coefficients $\dots$

$2$. I do not understand how $A$ is a symmetric function of $\theta_1, \ldots, \theta_n$. Of course it is a function of the $\theta_i$'s, but how can we conclude that it is a symmetric function? When I investigate the cases $n=2,3$ it is apparent, but it becomes tedious for higher values of $n$.

What is the missing piece of the puzzle?

Is there a closed-form expression for the PDF of this random variable?

Posted: 22 Aug 2021 08:21 PM PDT

Suppose that we have \begin{equation} t=\sum\limits_{n=1}^2 \omega_n x_n=\sum\limits_{n=1}^2 t_n \end{equation} where $x_n$, $n=1,2$, are independent random variables and $x_n \sim \mathcal{F}(2,2K_n)$. Here $\mathcal{F}(n_1,n_2)$ denotes the F distribution with parameters $n_1$ and $n_2$.

What I want to compute is the tail part of the distribution of $t$, i.e., $\int_g^{\infty} f_t(x)dx$.

My questions are :

Is there a closed-form expression for the PDF of $t$?

Or, does there exist an approximation to the tail part of the distribution of $t$?
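In the absence of a closed form, a Monte Carlo sketch gives the tail directly; the weights, parameters, and threshold below are hypothetical placeholders, since none are fixed in the question (an $F(d_1,d_2)$ draw is a ratio of scaled chi-square draws, and a chi-square with $d$ degrees of freedom is $\mathrm{Gamma}(d/2,\text{scale}=2)$):

```python
import random

# Monte Carlo tail estimate for t = w1*x1 + w2*x2 with x_n ~ F(2, 2*K_n).
random.seed(1)

def sample_F(d1, d2):
    chi1 = random.gammavariate(d1 / 2, 2)   # chi-square with d1 dof
    chi2 = random.gammavariate(d2 / 2, 2)   # chi-square with d2 dof
    return (chi1 / d1) / (chi2 / d2)

w, K, g, N = (0.5, 0.5), (4, 6), 3.0, 100_000
hits = sum(w[0] * sample_F(2, 2 * K[0]) + w[1] * sample_F(2, 2 * K[1]) > g
           for _ in range(N))
print(hits / N)   # empirical estimate of P(t > g)
```

This is only a numeric baseline against which any proposed closed form or tail approximation can be validated.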

Angular Distance Between Quaternions

Posted: 22 Aug 2021 08:02 PM PDT

I spent the past couple days reading about Quaternions on various resources and there are just a couple questions that I still do not fully understand, could someone please help me out?

Also, if you find my questions ill posed and believe that I have some deeper misunderstanding, it would be great if you could point me to some resource where I can clear my confusion.

Thanks!


  1. When representing 2D rotations using complex numbers, we can apply a rotation to an arbitrary vector using the dot product operation. Why does the dot product not work for quaternions as well (and instead we need the Hamilton product $qvq^{-1}$ to apply the 3D rotation represented by $q$ to a vector $v$)?

  2. When representing two different 3D rotations $r_1$ and $r_2$ using quaternions $q_1$ and $q_2$, one can use the geodesic distance between the two quaternions as a measure of how far apart from each other (how different) those rotations are:

$$\alpha=\arccos(2\langle q_1,q_2\rangle^2 -1).$$

$\alpha$ is the value of the angular arc between the two quaternions in $\mathbb{R}^4$; it varies between $0$ and $180$ degrees, and takes into account the fact that two quaternions represent the same rotation if the axis and angle are both negated.

  • when $\alpha=0$ then $r_1$ and $r_2$ are the same, but when $\alpha=180$ what is the relation between $r_1$ and $r_2$? Are they the inverse of one another?
  • I would have hoped that for an arbitrary vector $v$, $\alpha$ represented the angular distance between the rotated vector when rotating using $q_1$ and $q_2$, but that is not the case. Check the example below:

$$ q_1 = (\cos(\pi/4), \sin(\pi/4) [1,0,0]) $$ $$ q_2 = (\cos(\pi/4), \sin(\pi/4) [-\tfrac{\sqrt{3}}{2},-\tfrac{1}{2},0]) $$

$$\alpha=\arccos(2\langle q_1,q_2\rangle^2 -1)=172.318$$

Now say I have a vector $v_1=[1,2,3]$ and apply the two rotations. I get the two rotated vectors $v_{11}=q_1 v_1 q_1^*=[1,-3,2]$ and $v_{12}=q_2 v_1 q_2^*=[1.3839,2.6650,-2.2320]$, and if I measure the angle between these two vectors it actually is $\beta_1 = \arccos(\frac{\langle v_{11},v_{12}\rangle}{||v_{11}||\cdot||v_{12}||})=142.287 \neq \alpha$.

Furthermore, for a different vector $v_2=[23,43,-15]$ the final angle (omitting the calculations for brevity) is $\beta_2=162.518$, which is again different from $\alpha$ (and also $\beta_1$).

So what does $\alpha$ represent in terms of points in 3D space? I am having a hard time understanding how closeness between quaternions translates to closeness in 3D space.

When and why can functions "take on" the role of vectors in defining vector spaces?

Posted: 22 Aug 2021 07:39 PM PDT

In what I call "advanced" linear algebra, we examine the properties of vectors in a vector space (such as an inner product space) by checking that they satisfy, e.g., the Cauchy-Schwarz inequality, the triangle inequality, and similar tests. A few pages later in a typical textbook, we are doing the same tests on functions rather than vectors (that is, substituting functions for vectors) in these inequalities.

What allows us to do this? My limited understanding is that this can be done if the functions in question can be mapped onto vectors. Is this true, and if so, why?
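A numeric illustration (my own sketch): with the inner product $\langle f,g\rangle=\int_0^1 f(t)g(t)\,dt$, functions pass the same Cauchy-Schwarz test as vectors in $\mathbb{R}^n$; what matters is only that the vector-space and inner-product axioms hold, not that the elements look like arrows or tuples.

```python
from math import sin, cos

# Treat functions on [0, 1] as vectors with <f, g> = ∫_0^1 f g dt
# (midpoint rule); Cauchy-Schwarz then holds exactly as in R^n.
def inner(f, g, steps=10_000):
    h = 1.0 / steps
    return sum(f((i + 0.5) * h) * g((i + 0.5) * h) for i in range(steps)) * h

lhs = inner(sin, cos) ** 2
rhs = inner(sin, sin) * inner(cos, cos)
print(lhs <= rhs, lhs, rhs)   # Cauchy-Schwarz: |<f,g>|^2 <= <f,f><g,g>
```

In fact the discrete sum here is literally a dot product of sample vectors (times $h$), which is one concrete sense in which functions "are" vectors: they form a vector space on which the same axioms, and hence the same theorems, apply.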
