Saturday, June 26, 2021

Recent Questions - Mathematics Stack Exchange



Prove that {→, ⊥, ↔} is functionally complete. Justify using a truth table

Posted: 26 Jun 2021 09:00 PM PDT

I thought that since {→, ⊥} is equivalent to {¬}, it suffices to prove that {¬, ↔} is functionally complete. But then I got stuck.

Absorbing Markov chain for a coin flip problem: how do you derive the matrices?

Posted: 26 Jun 2021 09:00 PM PDT

Can someone explain this answer:

Markov model for coin flip - Non biased over biased

and how it relates to this:

http://www.columbia.edu/~ks20/stochastic-I/stochastic-I-GRP.pdf

So why are the transition matrices different?

If there are 5 states, how can the 3×3 Q matrix have no absorbing states?

How do you get the 2x3 R matrix?

What are the entries of these in terms of the probabilities of going from $x$ dollars to $x+1$ or $x-1$ dollars?

If you have $3, then there is a p chance of absorbing.

If you have $1, then also a p chance. So if this is in the 3×3 matrix, doesn't it lead to absorption instead of transition?

How do you get the characteristic polynomial and then solve it to get the closed form for the probabilities of having between $0 and $4 within k throws, starting with some amount of dollars?

I am trying to go from the matrix form to the recursion form, and then solve that to find the probabilities of ending with certain quantities of $ in a certain number of steps.

Trying to put all this together. Most of the guides online do not derive this well or explain the steps.
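Not an answer from the linked notes, but the block structure can be sketched concretely. Below is a minimal gambler's-ruin example (my assumption of the setup: states $0 through $4, with $0 and $4 absorbing, win probability p), computing the absorption probabilities $B=(I-Q)^{-1}R$ from the Q and R blocks. Note that some sources write R as its 2×3 transpose; the content is the same.

```python
# Hypothetical gambler's-ruin chain on states $0..$4: $0 and $4 absorb,
# $1-$3 are transient, and each flip wins $1 with probability p.
p = 0.5

# Q: transient-to-transient transitions (rows/cols are states $1, $2, $3).
Q = [[0.0, p,   0.0],
     [1-p, 0.0, p  ],
     [0.0, 1-p, 0.0]]

# R: transient-to-absorbing transitions (columns: absorb at $0, absorb at $4).
R = [[1-p, 0.0],
     [0.0, 0.0],
     [0.0, p  ]]

def solve(A, b):
    """Solve A x = b by Gaussian elimination with partial pivoting."""
    n = len(A)
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

# Absorption probabilities B = (I - Q)^{-1} R, solved column by column.
I_minus_Q = [[(1.0 if i == j else 0.0) - Q[i][j] for j in range(3)] for i in range(3)]
B = list(zip(*[solve(I_minus_Q, [R[i][k] for i in range(3)]) for k in range(2)]))
print(B)  # row i: (P(absorb at $0), P(absorb at $4)) starting from $(i+1)
```

With a fair coin this reproduces the classical ruin probabilities 3/4, 1/2, 1/4 of ending at $0 from $1, $2, $3.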

Over which fields $K$ is an elliptic curve connected?

Posted: 26 Jun 2021 08:55 PM PDT

Over which fields $K$ is an elliptic curve connected?

If $K=\Bbb C$, an elliptic curve over $\Bbb C$ is just homeomorphic to a torus, so it is connected. But over other fields? If $K=\Bbb R$, $y^2=x^3-x$ seems disconnected (I'm not confident, because the point at infinity may connect the disconnected pieces). What about other fields?

Attempt to prove that the rank theorem implies the image of this function with constant rank is a manifold

Posted: 26 Jun 2021 08:52 PM PDT

Let $f:U\rightarrow U$, $U\subseteq\mathbb{R^m}$ open, such that $f\circ f = f$, and let $M=f(U)$. I showed that there exists a neighborhood $V\subseteq U$ of $M$ such that $f$ has constant rank on $V$, let us say $\text{rank}(f'(x))=n$ for each $x\in V$. Now I need to prove that $M$ is a manifold.

My attempt is as follows: given $x\in M$, by the constant rank theorem, there exist diffeomorphisms $\alpha: Y\times N\rightarrow Z$ and $\beta: Z\rightarrow Y\times W$ such that $Y\times W$, $Y\times N \subseteq \mathbb{R^n}\times\mathbb{R^{m-n}}$, $Z\subseteq V$ are all open, $x=f(x)\in Z$, and $(\beta\circ f\circ\alpha)(y,w)=(y,0)$. By taking $\phi: Y\rightarrow Z$, $\phi(y)=\beta^{-1}(y,0)$, there exists a unique $(y,w)\in Y\times W$ such that $\phi(y)=\beta^{-1} (y,0)=(f\circ\alpha(y,w))\in M $, that is, $\phi(Y)\subseteq M\cap Z$. Also, $\phi$ is a homeomorphism (with inverse $\pi\circ\beta|_{\phi(Y)}$, where $\pi:Y\times N\rightarrow Y$ is the projection $\pi(y,n)=y$) with injective derivative (because of $\beta^{-1}$), hence a parametrization from $Y$ to a neighborhood (in $M$) of $x$.

Is that the correct idea? Also, any other proof or intuition always helps.

Selecting number of cases

Posted: 26 Jun 2021 08:52 PM PDT

A, B, C, D develop 18 items: 5 jointly by A and C, 4 jointly by A and D, 4 jointly by B and C, and 5 jointly by B and D. The number of ways of selecting 8 items out of 18 so that the selected ones belong equally to A, B, C, D is?
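The count can be brute-forced under one natural reading (my assumption, not stated in the problem): each item credits both of its two developers, and "equally" means each of A, B, C, D ends up credited with exactly 4 of the 8 selected items.

```python
# Brute force over all 8-subsets of the 18 distinct items; each item is
# tagged with the pair of developers it belongs to.
from itertools import combinations
from collections import Counter

items = [('A', 'C')] * 5 + [('A', 'D')] * 4 + [('B', 'C')] * 4 + [('B', 'D')] * 5

count = 0
for sel in combinations(range(18), 8):
    cred = Counter()
    for i in sel:
        cred.update(items[i])            # credit both developers of item i
    if all(cred[p] == 4 for p in 'ABCD'):
        count += 1
print(count)
```

This agrees with the closed form $\sum_{a=0}^{4}\left[\binom{5}{a}\binom{4}{4-a}\right]^2$, obtained by letting $a$ be the number of A-C items chosen, which forces $4-a$ A-D items, $4-a$ B-C items, and $a$ B-D items.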

Cohomology group of some homogeneous spaces obtained from Lie groups

Posted: 26 Jun 2021 08:43 PM PDT

Here we hope to confirm the cohomology group of a manifold that arises as a homogeneous space obtained from Lie groups, in particular with coefficients in $\mathbb{Z}/2$. We focus on one example,

$$ H^2(\frac{O(10)}{U(5)}, \mathbb{Z}/2) $$

What I know so far is that $$ \pi_0(\frac{O(10)}{U(5)})=\mathbb{Z}/2, \quad \pi_1(\frac{O(10)}{U(5)})=0, \quad \pi_2(\frac{O(10)}{U(5)})=\mathbb{Z}. $$

We can also use the universal coefficient theorem (UCT): $$ H^2(X,A)=\operatorname{Hom}(H_2(X),A)\oplus \operatorname{Ext}(H_1(X),A). $$

Am I correct to expect $H^2(\frac{O(10)}{U(5)}, \mathbb{Z}/2)=\mathbb{Z}/2$? How do you demonstrate this statement?

The open sets in a subspace of a metric space.

Posted: 26 Jun 2021 08:39 PM PDT

$X_1$ is a subspace of the metric space $X$. Can every open set $G_1$ of $X_1$ be expressed as $G_1=X_1\cap G$, where $G$ is an open set of $X$? And how does one prove it? Thanks for helping!

Cohomology group of Grassmannian manifold

Posted: 26 Jun 2021 08:36 PM PDT

Here we hope to confirm the cohomology group of a Grassmannian manifold; this manifold also arises as a homogeneous space obtained from Lie groups. In particular we use coefficients in $\mathbb{Z}/2$. We focus on one example,

$$ H^2(\frac{O(10)}{O(6) \times O(4)}, \mathbb{Z}/2) $$

What I know so far is that $$ \pi_0(\frac{O(10)}{O(6) \times O(4)})=0, \quad \pi_1(\frac{O(10)}{O(6) \times O(4)})=\mathbb{Z}/2, \quad \pi_2(\frac{O(10)}{O(6) \times O(4)})=\mathbb{Z}/2, $$

We can also use the universal coefficient theorem (UCT): $$ H^2(X,A)=\operatorname{Hom}(H_2(X),A)\oplus \operatorname{Ext}(H_1(X),A). $$

Am I correct to expect $H^2(\frac{O(10)}{O(6) \times O(4)}, \mathbb{Z}/2)=\mathbb{Z}/2$? How do you demonstrate this statement?

Relation between gradient and directional derivative.

Posted: 26 Jun 2021 08:49 PM PDT

In my optimization class, there was an equation I don't quite get.

For $f:\mathbf{R^n}\rightarrow \mathbf{R}$, $d\in \mathbf{R^n}$, $d$ is a direction for $f$.

$\lim_{\tau\to 0^+}\frac{f(x+\tau d)-f(x)}{\tau}=\nabla f(x+\tau d)^Td\bigg|_{\tau = 0}=\nabla f(x)^Td$

I don't get what $\bigg|_{\tau = 0}$ means in this equation. Does it mean that after I calculate $\nabla f(x+\tau d)^Td$, I set $\tau = 0$?

If we look into the first and third part, $\lim_{\tau\to 0^+}\frac{f(x+\tau d)-f(x)}{\tau}=\nabla f(x)^Td$

It doesn't make sense to me. The left-hand side should mean the slope of $f$ in direction $d$, and the right-hand side should mean using $\nabla f(x)$ as a slope and moving toward $d$. How can these two be the same?

If you can explain the equation to me thoroughly, it will help me a lot. Thanks for your help.
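A quick numerical check can make the identity concrete. Here is a sketch with a made-up function $f(x_1,x_2)=x_1^2+3x_1x_2$ (my example, not from the class): the one-sided difference quotient for small $\tau$ matches $\nabla f(x)^T d$.

```python
import math

def f(x):
    # an arbitrary smooth test function
    return x[0]**2 + 3*x[0]*x[1]

def grad_f(x):
    # its gradient, computed by hand
    return [2*x[0] + 3*x[1], 3*x[0]]

x = [1.0, 2.0]
d = [0.6, 0.8]          # any direction vector

tau = 1e-6
diff_quot = (f([x[0] + tau*d[0], x[1] + tau*d[1]]) - f(x)) / tau
dot = sum(g * di for g, di in zip(grad_f(x), d))

print(diff_quot, dot)   # both approximate the directional derivative
```

Here $\nabla f(1,2)=(8,3)$, so both numbers come out near $8\cdot 0.6+3\cdot 0.8=7.2$: the difference quotient and the gradient-dot-direction formula compute the same quantity.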

Derivation of the quaternionic cosine

Posted: 26 Jun 2021 08:35 PM PDT

I started with

$\cos(q) = \cos(a+ib+jc+kd)$

$\cos(q) = \cos(a)\cos(ib+jc+kd)-\sin(a)\sin(ib+jc+kd)$

$\cos(q) = \cos(a)(\cos(ib)(\cos(jc)\cos(kd)-\sin(jc)\sin(kd))-\sin(ib)(\sin(jc)\cos(kd)+\cos(jc)\sin(kd)))-\sin(a)(\sin(ib)(\cos(jc)\cos(kd)-\sin(jc)\sin(kd))+\cos(ib)(\sin(jc)\cos(kd)+\cos(jc)\sin(kd)))$

$\cos(q) = \cos(a)(\cosh(b)(j\cosh(c)k\cosh(d)-j\sinh(c)k\sinh(d))-i\sinh(b)(j\sinh(c)k\cosh(d)+j\cosh(c)k\sinh(d)))-\sin(a)(i\sinh(b)(j\cosh(c)k\cosh(d)-j\sinh(c)k\sinh(d))+i\cosh(b)(j\sinh(c)k\cosh(d)+j\cosh(c)k\sinh(d)))$

then with the help of $i^2=j^2=k^2=ijk=-1$

I got to

$\cos(q) = \cos(a)(\cosh(b)(i\cosh(c)\cosh(d)-i\sinh(c)\sinh(d))-i\sinh(b)(i\sinh(c)\cosh(d)+i\cosh(c)\sinh(d)))-\sin(a)(i\sinh(b)(i\cosh(c)\cosh(d)-i\sinh(c)\sinh(d))+i\cosh(b)(-i\sinh(c)\cosh(d)+i\cosh(c)\sinh(d)))$

and finally to

$\cos(q) = i\cos(a)(\cosh(b)\cosh(c-d)+\sinh(b)\sinh(c+d))+\sin(a)\cosh(c-d)(\sinh(b)-\cosh(b))$

I'd like to know if it is correct and whether it is a legitimate function, since I couldn't find a quaternionic trigonometric function anywhere on the internet. I find it strange that $j$ and $k$ don't appear in the end product.

P.S. I also tried expanding it with a Taylor series, but I found it cumbersome.

$\lim_n \prod_{k=1}^n \left(\frac{1}{k^2}\cos\frac{kt}{\sqrt n}+\left(1-\frac{1}{k^2}\right)\cos\frac{t}{\sqrt{n}}\right)=e^{-\frac{t^2}{2}}$

Posted: 26 Jun 2021 08:54 PM PDT

I would like to prove that $$\displaystyle\lim_{n\to \infty} \ \prod_{k=1}^n \left(\frac{1}{k^2}\cos\left(\frac{kt}{\sqrt{n}}\right)+\left(1-\frac{1}{k^2}\right)\cos\left(\frac{t}{\sqrt{n}}\right) \right)=\exp(-t^2/2)\ \forall t\in \mathbb{R}$$ but can't even start. The context of this limit is from an exercise of probability theory, where each factor is the characteristic function of $\frac{X_k}{\sqrt{n}}$ at point $t$, and I want to verify that the characteristic function of $\frac{S_n}{\sqrt{n}}$ converges pointwise to the characteristic function associated to the standard normal distribution, where $\{X_n\}_{n\ge 1}$ is a sequence of independent random variables and $S_n=\sum_{i=1}^n X_i$.

Center of an $R$-order in a central simple $F$-algebra

Posted: 26 Jun 2021 08:28 PM PDT

I'd like to show that

Let $R$ be a domain with field of fractions $F$. Let $\mathcal{O}$ be an $R$-order in an $F$-algebra $B$. When $B$ is central simple, $Z(\mathcal{O})=R$.

I saw that $Z(\mathcal{O})=\mathbb{Z}$ for a quaternion algebra $B$ over $\mathbb{Q}$, so I suppose this is true in other finite dimensions.

Since an order contains an $F$-basis of $B$, we have that $Z(\mathcal O) \subset Z(B)=F$.

I don't know whether it's elementary from here to prove that $Z(\mathcal{O})=R$. Wedderburn's theorem can be used to show that an $F$-algebra is quaternion if and only if it is a central simple algebra of dimension $4$; I'm wondering if it might be helpful for proving this statement as well.

Radical of the sum of modules

Posted: 26 Jun 2021 08:25 PM PDT

Let $R$ be a commutative ring (not necessarily with a unit), $M$ an $R$-module, and $N$ a submodule of $M$. Following Zariski and Samuel (1958), we define the radical $\sqrt N$ of the submodule $N$ as the set $\tau$ of all elements $a \in R$ some power $a^n$ of which satisfies $a^n \cdot M \subset N$. Clearly $\tau$ is an ideal of $R$.

Is it true that $\sqrt{M_1 + M_2} = \sqrt{\sqrt{M_1} + \sqrt{M_2}}$, where $M_1$ and $M_2$ are submodules of $M$? And if not, what is a counterexample?

Conditions for the convergence of the argmin of two random functions over a random set

Posted: 26 Jun 2021 08:23 PM PDT

Suppose I have a sequence of continuous functions $f_n, g_n \colon S \to \mathbb{R}$, where $S$ is a compact subset of $\mathbb{R}^n$, such that \begin{equation*} \sup_{x \in S} | f_n(x) - g_n(x) | \to 0 ~ w.p.1, ~ \sup_{x \in S} |f_n(x) - f(x)| \to 0 ~ w.p.1 \end{equation*} as $n \to \infty$, where $f \colon S \to \mathbb{R}$. I also have a sequence of compact sets $X_n \subset S$ and I define $v_n = \text{argmin}_{x \in X_n} f_n(x)$ and $w_n = \text{argmin}_{x \in X_n} g_n(x)$ which I can assume are unique.

I am able to assume that $X_n = \{x \in S \, \colon \, h_{nj}(x) \leq 0, ~ j = 1,\ldots,J\}$ and $X = \{x \in S \, \colon \, h_j(x) \leq 0, ~ j = 1,\ldots, J\}$ where $h_{nj}, h_j$ are continuous and $\sup_{x \in S} | h_{nj}(x) - h_j(x) | \to 0 ~ w.p.1$ for all $j$. My main goal is to show that $||v_n - w_n|| \to 0 ~ w.p.1$ as $n \to \infty$. I can assume that $v = \text{argmin}_{x \in X} f(x)$ exists and is unique.

Are the assumptions I have placed on the problem sufficient for $||v_n - w_n|| \to 0 ~ w.p.1$? And if so, how might I prove this? Keep in mind I don't need $||v_n - v|| \to 0$ or $||w_n - v|| \to 0$, just that $v_n$ and $w_n$ themselves become close.

Which of the following is not always diagonalizable?

Posted: 26 Jun 2021 08:23 PM PDT

If $A \in M_n(\mathbb{R})$ is a diagonalizable matrix, which of the following is not always diagonalizable: $A+A^2$, $A-A^2$, $A+A^T$, or $A-A^T$?

I got $A+A^T$ as an answer, but I'm not sure about it. Is there any way to prove this?

Logarithm of negative numbers

Posted: 26 Jun 2021 08:49 PM PDT

I just read the expansion $\log(1-x) = -x -\frac{x^2}{2} - \frac{x^3}{3} - \cdots$. So, on letting $x = 2$, couldn't I theoretically calculate $\log(-1)$?
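No: the series converges only for $-1 \le x < 1$, so plugging in $x=2$ leaves the radius of convergence and the partial sums simply diverge. A quick numerical illustration (my addition):

```python
def partial_sum(x, N):
    """N-th partial sum of the series -sum_{n>=1} x^n / n for log(1-x)."""
    return -sum(x**n / n for n in range(1, N + 1))

# Inside the radius of convergence the sums settle down; at x = 2 they blow up.
print([partial_sum(0.5, N) for N in (5, 10, 15)])   # stabilizes near log(0.5)
print([partial_sum(2.0, N) for N in (5, 10, 15)])   # magnitudes keep growing
```

The value $\log(-1)$ does exist in complex analysis ($i\pi$ on the principal branch), but it has to come from analytic continuation, not from this series.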

Prove that for all $a,b,c \in \mathbb{R}$ [closed]

Posted: 26 Jun 2021 09:02 PM PDT

Prove that for all $a,b,c \in \mathbb{R}$ we have:

$$\frac{ab}{4a^2+b^2+4c^2}+\frac{bc}{4b^2+c^2+4a^2}+\frac{ac}{4c^2+a^2+4b^2} \leqslant \frac 13 $$

Confusion regarding intersection of diagonals

Posted: 26 Jun 2021 09:03 PM PDT

In a heptagon, no more than two diagonals intersect at any point other than the vertices. What is the number of points of intersection of the diagonals (excluding the vertices of the heptagon)?

Is the answer 35 or 49?

The 35 approach uses $\binom{n}{4} = \binom{7}{4} = 35$: every choice of 4 vertices determines one interior intersection point.

The 49 approach uses $\binom{14}{2} - 7\binom{4}{2}$. The total number of diagonals of a heptagon is 14, so there are $\binom{14}{2}$ pairs of diagonals. But the 4 diagonals emanating from a single vertex never intersect away from that vertex, so we subtract $\binom{4}{2}$ for each of the 7 vertices: $\binom{14}{2}-7\binom{4}{2} = 91 - 42 = 49$.

Which one is correct? Please help.
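The discrepancy can be settled by brute force (my addition). In a convex polygon, two diagonals cross strictly inside exactly when their endpoints alternate around the polygon; that is what the $\binom{7}{4}$ count captures, while $\binom{14}{2}-7\binom{4}{2}$ also counts endpoint-disjoint pairs of diagonals that do not cross inside.

```python
from itertools import combinations

n = 7
# a diagonal joins two non-adjacent vertices (adjacency wraps around)
diagonals = [(i, j) for i, j in combinations(range(n), 2)
             if (j - i) % n not in (1, n - 1)]

def crosses(d1, d2):
    """True iff the two chords (with sorted endpoints) interleave,
    i.e. cross strictly inside the convex polygon."""
    (a, b), (c, d) = d1, d2
    return a < c < b < d or c < a < d < b

crossing_pairs = sum(crosses(d1, d2) for d1, d2 in combinations(diagonals, 2))
print(len(diagonals), crossing_pairs)
```

The count of crossing pairs equals the number of 4-vertex subsets, confirming the answer 35.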

Position a set of points such that they are all optimally a certain distance from the origin

Posted: 26 Jun 2021 09:15 PM PDT

This is a very specific question and since I am no mathematician by trade I did not really figure out how to ask it in mathematical jargon, I apologize in advance for the inconvenience.

This is a problem I stumbled upon while doing regular linear algebra for an application for computational chemistry. It showed up in a subroutine to assemble the geometry of a large, complicated molecule which I had to do one piece at a time. I jotted down a few trials on how to solve this, but it got me stumped real fast and I can't find a way out.

Stripped from its chemical details, the problem is as such.

Suppose there is a set of points $\mathcal{S}$ in 3D Euclidean space $\mathbb{R}^3$, a subset of a larger set $\mathcal{A}$. Each of these points has a color; they can all be the same color, each a different color, or any combination of colors. Each color $c$ is associated with a certain distance $d_c$. This value $d_c$ is the optimal distance from the origin for the points of color $c$.

The solution to the problem would be the rotations and translations that transform the entire set of points $\mathcal{A}$ such that the norm of the position vector $|\vec{p}_i|$ of each point in $\mathcal{S}$ is as close as possible to its optimal distance.

So the question is: Is it possible to find the rotation matrices and translation vectors that accomplish that? Is there a procedure to do it?

P.S: It is important that the transformations be applied to all the points in $\mathcal{A}$ simultaneously.

Find a set of values

Posted: 26 Jun 2021 08:45 PM PDT

Is there a way to find $ \tau_1 $ and $ \tau_2 $ numerically? (Using trigonometry or whatever else…)

The journal I found this graph in (Popular Astronomy Vol. 3, p. 356; see link) mentions using dividers or lined paper, but does not mention any mathematical way… (The graph is mine, adapted from that in the article.)

EDIT: I forgot to specify that $ \lambda = 13° 15′ $, if it's of any help…

Thanks!

What is the Probability of Head in coin flip when coin is flipped two times if coin is biased ($60\%$ chance of heads per flip)?

Posted: 26 Jun 2021 08:27 PM PDT

If unbiased coin then the probability of Head in coin flip when coin is flipped two times $= 75\%$.
$$P(H_1 \cup H_2) = \frac12 + \frac12 − \frac12 \times \frac12 = \frac34$$

But what is the probability of Head in coin flip when coin is flipped two times if the coin is biased and has a $60\%$ chance of heads rather than $50\%$?
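The same inclusion-exclusion (or, equivalently, the complement rule) applies with $p=0.6$ in place of $\frac12$; a quick check (my addition):

```python
p = 0.6

# complement rule: P(at least one head) = 1 - P(two tails)
p_at_least_one = 1 - (1 - p) ** 2

# inclusion-exclusion, exactly as in the unbiased computation above
p_incl_excl = p + p - p * p

print(p_at_least_one, p_incl_excl)  # both ~ 0.84
```

So the biased coin gives $P(H_1 \cup H_2) = 0.6 + 0.6 - 0.36 = 0.84$.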

How can I prove this integral inequality?

Posted: 26 Jun 2021 08:42 PM PDT

Let $f: [0,1] \to \mathbb{R}$. Prove that $$\left| \int_{0}^{1}xf(x)\,dx \right| \leq \frac{1}{3}.$$

How can I evaluate $\int _0^1\frac{\operatorname{Li}_2\left(-x^2\right)}{\sqrt{1-x^2}}\:\mathrm{d}x$

Posted: 26 Jun 2021 08:49 PM PDT

I've been trying to find and prove that: $$\int _0^1\frac{\operatorname{Li}_2\left(-x^2\right)}{\sqrt{1-x^2}}\:\mathrm{d}x=\pi \operatorname{Li}_2\left(\frac{1-\sqrt{2}}{2}\right)-\frac{\pi }{2}\left(\ln \left(\frac{1+\sqrt{2}}{2}\right)\right)^2$$ result obtained via WolframAlpha.

Here $\operatorname{Li}_2\left(z\right)$ denotes the dilogarithm function.

However, it seems very difficult to prove. The integral equals: $$-\frac{\pi ^3}{24}+2\int _0^1\frac{\arcsin \left(x\right)\ln \left(1+x^2\right)}{x}\:\mathrm{d}x$$ Using the series representation of the dilogarithm function yields: $$\sum _{n=1}^{\infty }\frac{\left(-1\right)^n}{n^2}\int _0^1\frac{x^{2n}}{\sqrt{1-x^2}}\:\mathrm{d}x$$ Perhaps using the beta function can allow us to proceed further? Any kind of hint or solution is welcome.

Complex integral without theory of holomorphic functions

Posted: 26 Jun 2021 08:57 PM PDT

Let $z_0=-e^{i\theta_0}$ and let $z$ not lie on the line $(Oz_0)$.

How can I compute the following integral without talking about complex logarithm and holomorphic functions?

$\int_0^1 \frac{dt}{z_0 + t(z-z_0)}.$ Thank you in advance.

The value is obviously $\frac{1}{z-z_0}(\log(z) - \log(z_0))$, where $\log$ is a branch of the logarithm defined everywhere except on $(Oz_0)$, but I'd like to compute this integral without all the theory of complex logarithms.
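The closed form can at least be checked numerically without any theory. A sketch with my own choice of values, $\theta_0=\pi$ (so $z_0=1$) and $z=1+i$, which keeps the segment from $z_0$ to $z$ away from the principal branch cut of the logarithm:

```python
import cmath

z0, z = 1 + 0j, 1 + 1j

# composite trapezoid rule for int_0^1 dt / (z0 + t(z - z0))
integrand = lambda t: 1.0 / (z0 + t * (z - z0))
n = 100_000
h = 1.0 / n
total = 0.5 * (integrand(0.0) + integrand(1.0)) + sum(integrand(k * h) for k in range(1, n))
integral = h * total

expected = (cmath.log(z) - cmath.log(z0)) / (z - z0)
print(integral, expected)  # agree to high precision
```

Of course this only verifies the formula for one choice of branch and endpoints; it does not replace the branch-cut bookkeeping the question asks to avoid.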

How would we know which slope to assign $m_1$ and to which $m_2$, while finding angle between two non vertical lines?

Posted: 26 Jun 2021 08:56 PM PDT

If we have to compute the angle between two lines when their slopes are given, then for using the formula $\tan θ=\dfrac{m_2-m_1}{1+m_2\cdot m_1}$, how would we know which slope to assign $m_1$ and to which $m_2$?

In my textbook, the question was given as: find the angle from the line with slope $-\frac{7}{3}$ to the line with slope $\frac{5}{2}$. I can't figure out which one of them should be $m_1$ and which should be $m_2$.
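The convention is that the line you measure *from* supplies $m_1$. A worked check with the textbook numbers (my addition):

```python
import math

m1, m2 = -7/3, 5/2            # from the line with slope m1 to the line with slope m2
tan_theta = (m2 - m1) / (1 + m1 * m2)

angle = math.degrees(math.atan(tan_theta))
if angle < 0:                  # atan returns values in (-90, 90); adding 180
    angle += 180               # gives the counterclockwise angle from line 1 to line 2
print(tan_theta, angle)
```

Here $\tan\theta = \frac{29/6}{-29/6} = -1$, so the counterclockwise angle from the first line to the second is $135°$; swapping $m_1$ and $m_2$ would instead give the $45°$ angle measured the other way around.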

Is this system of equations usable in linear regression?

Posted: 26 Jun 2021 08:36 PM PDT

I have a set of equations that I want to transform so that I may be able to use linear regression to predict the constant D:

Equation 1: $$ y = 1 -\left(\frac{6Sh^{2}}{\beta_{n}^{2}\left(\beta_{n}^{2} + Sh^{2} - Sh\right)}\right)e^{ -\frac{\beta_{n}^{2}}{R^{2}}Dt} $$ where $\beta_n$ is the first root of the equation: $$ \beta_n \cot(\beta_n) = 1 - Sh $$

And $$ Sh = hR/D. $$ Originally, equation 1 includes a summation over all $\beta_n$ roots for the second term, but numerically the second and higher roots can be estimated as 0, leaving us with this simplified equation.

Known variables: $h$ (constant), $R$ (constant), $t$ (independent variable, time); $y$ is the given dependent variable on $(0,1)$.

I have experience with transforming linear equations for prediction using linear regression, but I am unsure if it is even possible to do so in this case, given that the definition of $\beta_n$ relies on the constant D, which is to be estimated via regression.

Any insight would be greatly appreciated, thank you.
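Whatever fitting scheme ends up being used, one practical ingredient is that $\beta_n$ must be recomputed numerically for each trial value of $D$ (since $Sh$ depends on $D$). A minimal sketch (my assumption: $\beta_n$ here means the first positive root, which for $Sh>0$ lies in $(0,\pi)$ because $\beta\cot\beta$ decreases from $1$ to $-\infty$ on that interval):

```python
import math

def first_beta(Sh, tol=1e-12):
    """First positive root of beta*cot(beta) = 1 - Sh, by bisection on (0, pi).

    Assumes Sh > 0, so that f changes sign on the interval.
    """
    f = lambda b: b / math.tan(b) - (1 - Sh)
    lo, hi = 1e-9, math.pi - 1e-9
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if f(lo) * f(mid) <= 0:
            hi = mid
        else:
            lo = mid
        if hi - lo < tol:
            break
    return 0.5 * (lo + hi)

# Sanity check: Sh = 1 gives beta*cot(beta) = 0, i.e. beta = pi/2.
print(first_beta(1.0), math.pi / 2)
```

With this in hand, a nonlinear least-squares fit over $D$ (recomputing $Sh$ and $\beta_n$ at each step) is usually more natural than forcing the model into a linear-regression form.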

How does rotating a circle create a spherical surface without leaving "gaps"? [closed]

Posted: 26 Jun 2021 08:41 PM PDT

EDIT: Agreed, this isn't a well formed question. But responses below have at least given me a different way to think about it.

EDIT2: Thanks to the answers, I discovered the Google S2 library (http://s2geometry.io/devguide/s2cell_hierarchy), which provides methods for tracing a space-filling curve on a sphere. I'm working this into my camera position algorithm instead of just gimballing through altitude/azimuth. Thanks!

First post here because I'm not a math guy, but I have a feeling the confusion I have is due to something mathy. Perhaps this isn't even the right forum; if not, I'll delete.

Here's the problem:

I built an apparatus to rotate a sensor through a 360° circle and measure incident light at each point. It's pretty cool; I can automatically orient a sensor on the end of an arm to maximize incident light (it looks like a radar antenna) using a stepper.

I tried extending this to a sphere, but what I found was: I need to discretize the second angle to some minimum delta-phi. It seems odd to me that I can sweep through an infinite number of degrees in a circle, continuously, as my theta angle, but then I need to choose a delta-phi to rotate that circle through to scan the sphere's surface.

How is it that I can rotate through an infinite number of points on a circle in one rotation in finite time, but cannot cover the infinite number of circles that comprise the surface of a sphere in finite time because I can always increment a smaller and smaller phi angle...

Or maybe the question is: how is it that a rotating circle traces out a sphere's surface? Aren't there always going to be more points "covered" by the circle at the poles than at the equator, since the distance covered by an infinitesimal delta-phi is shorter at the poles than at the equator?

Obviously I ultimately have to discretize everything since I'm using a digital circuit, but this sudden appearance of finite/infinite stopped me in my tracks...

I'm not even sure what I'm asking, just puzzled. Anyone know what I'm up against here?

Explanation of several remarks of Gauss on representations of a given number as sum of four squares.

Posted: 26 Jun 2021 09:03 PM PDT

Remark 1: On p. 384 of volume 3 of Gauss's Werke, which is part of an unpublished treatise on the arithmetic-geometric mean, Gauss makes the following remark:

On the theory of the division of numbers into four squares: The theorem that the product of two sums of four squares is itself a sum of four squares, is most simply represented as follows: let $l,m,\lambda,\mu,\lambda',\mu'$ be six complex numbers such that $\lambda,\lambda'$ and $\mu,\mu'$ are conjugate. Let $N$ denote the norm; then $$(Nl+Nm)(N\lambda + N\mu)=N(l\lambda+m\mu)+N(l\mu'-m\lambda')$$ and also $$(N(n+in')+N(n''+in'''))(N(1-i)+N(1+i))=N((n+n'+n''-n''')+i(-n+n'+n''+n'''))+N((n+n'-n''+n''')-i(n-n'+n''+n'''))$$ From this it is easy to derive the following two propositions, in which different representations of a number by a sum of four squares refer to the different value systems of the four roots, taking into account both the signs and the sequence of the roots. 1. If the fourfold of a number of the form $4k+1$ can be represented by four odd squares, then it can be represented half as often by one odd and three even squares, and vice versa: if a number can be represented in this way, then its quadruple can be represented twice as often in the first way. 2. If the fourfold of a number of the form $4k+3$ can be represented by four odd squares, then it can be represented half as often by one even and three odd squares, and vice versa: if a number can be represented in this way, then its quadruple can be represented twice as often in the first way.

Gauss then says that a certain identity of theta functions can be derived from these two theorems, namely the assertion (written here in modern notation):

$$\vartheta_{00}(0;\tau)^4 = \vartheta_{01}(0;\tau)^4 + \vartheta_{10}(0;\tau)^4$$

(Gauss denotes the three theta functions by $p(y),q(y),r(y)$).

Notes on remark 1:

  • The first identity in this passage was verified algebraically by me, and I also suspect it might be intimately connected with quaternions.
  • The second identity follows directly from the first by substituting $\lambda = 1-i$ and $\mu = 1+i$. Since $N(1-i)+N(1+i)=4$, this identity enables one to generate new representations of an integer $4s$ as a sum of four squares by simply changing the signs and order of the different numbers in the representation of $s$ as a sum of four squares. For example, if $s = 13 = 2^2+2^2+2^2+1^2$, then this identity implies $52=4s = 5^2+3^2+3^2+3^2$.
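As a sanity check on the first identity (my addition; here $N(z)=|z|^2$, with $\lambda'=\bar\lambda$ and $\mu'=\bar\mu$ as in the passage), it can be verified numerically on random complex inputs:

```python
import random

def N(z):
    """The norm of a complex number, |z|^2."""
    return abs(z) ** 2

random.seed(0)
max_err = 0.0
for _ in range(100):
    l, m, lam, mu = (complex(random.uniform(-5, 5), random.uniform(-5, 5))
                     for _ in range(4))
    lhs = (N(l) + N(m)) * (N(lam) + N(mu))
    rhs = N(l * lam + m * mu) + N(l * mu.conjugate() - m * lam.conjugate())
    max_err = max(max_err, abs(lhs - rhs))
print(max_err)  # only floating-point roundoff remains
```

Expanding both sides by hand, the cross terms $\pm 2\,\mathrm{Re}(l\bar m\lambda\bar\mu)$ cancel, which is exactly the complex two-term form of Euler's four-square identity; this supports the suspected connection with quaternion norms.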

Remark 2: On p. 1-2 of volume 8 of Gauss's Werke there is an additional note on the representation of numbers as sums of squares. According to Dickson's "History of the Theory of Numbers":

Gauss noted that every decomposition of a multiple of a prime $p$ into $a^2+b^2+c^2+d^2$ corresponds to a solution of $x^2+y^2+z^2\equiv 0 \pmod{p}$ proportional to $a^2+b^2,ac+bd,ad-bc$ or to the sets derived by interchanging $b$ and $c$ or $b$ and $d$. For $p\equiv 3 \pmod{4}$, the solutions of $1+x^2+y^2\equiv 0 \pmod{p}$ coincide with those of $1+(x+iy)^{p+1}\equiv 0$. From one value of $x+iy$ we get all by using: $$(x+iy)\frac{(u+i)}{(u - i)}$$ (where $u = 0,1,\cdots, p-1$). For $p\equiv 1 \pmod{4}, p = a^2+b^2$; then $b\frac{(u+i)}{a(u-i)}$ give all values of $x+iy$ if we exclude the values $a/b$ and $b/a$ of $u$.

Why is this interesting?

The reason these passages interest me is that they might give a clue for answering a previous question I posted on Math Stack Exchange: Did Gauss know Jacobi's four squares theorem?

Since these remarks show how to generate new resolutions into four squares from known resolutions, Gauss seems to have developed an enumerating argument (for the number of representations of a given number as a sum of four squares) in those remarks, an argument which might have guided him in the development of the identity for the generating function of $\vartheta_{00}(0;\tau)^4$ (an identity which leads to Jacobi's four squares theorem); this generating function appears on p. 445 of volume 3 of his works, but with no specific reference to sums of four squares.

Questions

  • The two propositions which Gauss mentions in remark 1 are unclear to me; I mean that the propositions themselves are unclear, not their derivation. They are simply not formulated clearly. Therefore, I'd like to get an explanation of the two propositions which Gauss mentions.
  • I'd also like to understand how the theta function identity follows from the two propositions.
  • What is the meaning of the results in remark 2? They are very complicated and I don't have a clue about their meaning.

Approximating midpoint of a curve

Posted: 26 Jun 2021 09:04 PM PDT

I am wondering whether there is any general method of approximating the midpoint of a given curve, given the coordinates of the endpoints and the equation of the curve. I know the calculus method of finding it exactly, but I am looking for a purely algebraic method that works for all elementary functions.

Considering it, I have come up with 4 methods:

  1. Finding the intersection of the vertical line down from the midpoint of the straight line connecting the two points, and the curve;

  2. finding the intersection of the horizontal line down from the midpoint of the straight line connecting the two points, and the curve;

  3. finding the intersection of the perpendicular line to the straight line connecting the two points and that passes through that line's midpoint, and the curve;

  4. guessing and checking via numerical integration.

The first two are clearly very poor approximations, the third only somewhat better, and the last is of course undesirable as a guess-and-check method.

The definition I am using is based on arc length. I just want the method to work for as many functions as possible; I don't really care if they are elementary or not.
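Method 4 need not be blind guessing: bisecting on the arc-length function makes it systematic, since arc length grows monotonically along the curve. A sketch (my addition), assuming the curve is a graph $y=f(x)$ with known derivative $f'$:

```python
import math

def arc_length(fp, a, b, n=2000):
    """Composite trapezoid rule for the arc length of y=f(x) on [a,b],
    i.e. int_a^b sqrt(1 + f'(x)^2) dx, given the derivative fp = f'."""
    h = (b - a) / n
    g = lambda x: math.sqrt(1.0 + fp(x) ** 2)
    return h * (0.5 * (g(a) + g(b)) + sum(g(a + k * h) for k in range(1, n)))

def midpoint_x(fp, a, b, tol=1e-10):
    """x-coordinate at which the arc length of y=f(x) on [a,b] is halved."""
    half = 0.5 * arc_length(fp, a, b)
    lo, hi = a, b
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if arc_length(fp, a, mid) < half:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# Sanity check on a straight line y = 2x over [0, 1]: midpoint at x = 0.5.
print(midpoint_x(lambda x: 2.0, 0.0, 1.0))
```

This is numerical rather than algebraic, but it is deterministic, works for any function with a computable derivative, and converges at bisection speed rather than by trial and error.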

Why only one unbounded connected component

Posted: 26 Jun 2021 09:05 PM PDT

Here on page 344 it is stated that

If $U \subset \mathbb C$ is bounded then $\mathbb C \setminus U$ has exactly one unbounded component.

While it seems intuitively clear to me, I don't quite see how to prove it. I know that connected components of a space are closed sets, but I didn't manage to use that to construct a proof. How can I prove it?
