Sunday, October 10, 2021

Recent Questions - Mathematics Stack Exchange


Trying to prove a power sum divisibility conjecture

Posted: 10 Oct 2021 08:04 PM PDT

For positive integers $n$ and $a$, define the integer power sum function $$S_n(a) = 1^n+2^n+\dotsb+(a-1)^n+a^n\!.$$

I'm trying to prove

Conjecture: $(2a+3) \nmid S_{2n}(a)$

or, alternatively, find a counterexample.

Numeric calculations (a.k.a. brute-force searches) have turned up no counterexamples, and algebraic manipulation with "small" $n$ suggests that $$(2a+3) \mid d_{2n}\bigl(2^{2n}S_{2n}(a)+1\bigr),$$ where $d_{2n}$ is the denominator of the power sum function (and thus related to Bernoulli numbers/polynomials); note that $d_{2n}$ is known to be squarefree.

By well-known [classical] results, we can deduce that $2a+3$ is squarefree, and that any prime $p$ dividing $2a+3$ must satisfy $(p-1) \mid 2n$. There are lots of recurrences out there which connect $S_{2n}(a)$ to $S_k(a)$ for various $k$ (especially $0 ≤ k < n$) — really, there is no shortage of literature on the integer power sum function. It seems like it should be easy to prove this conjecture… but after much work, I'm stuck.

Any suggestions — or a proof/disproof — would be appreciated.
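A brute-force sketch of the search (my ranges; the original search may well have gone further):

```python
# Brute-force search for counterexamples to the conjecture (2a+3) ∤ S_{2n}(a).
def S(n, a):
    """Power sum 1^n + 2^n + ... + a^n."""
    return sum(k**n for k in range(1, a + 1))

counterexamples = [(a, n)
                   for a in range(1, 150)
                   for n in range(1, 15)
                   if S(2 * n, a) % (2 * a + 3) == 0]
print(counterexamples)
```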

Prove that $|x-2y|$ is a square number when $x^2-4y+1=2(x-2y)(1-2y)$

Posted: 10 Oct 2021 07:52 PM PDT

Let $x,y$ be positive integers satisfying $x^2-4y+1=2(x-2y)(1-2y) \;\; (1)$. Prove that $|x-2y|$ is a square number.

I tried to compute: from $(1)$ we have $(1) \Leftrightarrow x^2+4xy-2x-8y^2+1=0$, and solving for $y$ gives $y=\dfrac{1}{4} \left(x+\sqrt{3x^2-4x+2}\right)$ or $y=\dfrac{1}{4} \left(x-\sqrt{3x^2-4x+2}\right)$. I can't show that $|x-2y|$ is a square number.

Separately, I note that $x=1$, $y=0$ is a solution of $(1)$.

An ideal is homogeneous iff it is generated by homogeneous polynomials

Posted: 10 Oct 2021 07:47 PM PDT

Recall that $f \in F[x_1, \cdots, x_n]$ is homogeneous if all of its monomials are of the same degree, i.e., if $f = \sum_i a_ix_1^{d_{i_1}}\cdots x_n^{d_{i_n}}$ then $d_{i_1} + \dots + d_{i_n}$ is the same for each $i$. Of course this means that $f(ax_1, \cdots, ax_n) = a^mf(x_1, \cdots, x_n)$ for every $a$, where $m$ is the common degree.

An ideal $I \subseteq F[x_1, \cdots, x_n]$ is homogeneous if, for any $p \in I$, the homogeneous components of $p$ (every polynomial can be broken into homogeneous components by simply collecting the terms of equal degree) are also in $I$.

I want to show:

An ideal is homogeneous iff it is generated by homogeneous polynomials.

$\implies$ is simple enough, I think: the generating set is naturally the collection of all homogeneous polynomials in $I$.

$\impliedby$ I think an inductive argument should work. If I take a degree $n+1$ polynomial $f$ and say that $f = f' + g$, where $f'$ is a homogeneous polynomial of degree $n+1$ and $g$ is the rest, then I know that $g$ is generated by homogeneous polynomials by the inductive assumption, and $f' = f-g \in I$ is a homogeneous polynomial, so it is automatically generated by a homogeneous polynomial in $I$, namely $f'$ itself? And together they generate $f$? Does that make sense? I am skeptical of this argument.
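A quick sympy sanity check of the statement (a sketch with an arbitrarily chosen example, not a proof): take an element of an ideal generated by homogeneous polynomials, split it into its homogeneous components, and confirm each component is again in the ideal.

```python
from sympy import groebner, symbols, Poly

x, y, z = symbols('x y z')
gens = [x**2 + y**2, x*y*z]                    # homogeneous generators
G = groebner(gens, x, y, z, order='grevlex')   # for ideal-membership tests
f = (x + z)*(x**2 + y**2) + y*(x*y*z)          # an element of the ideal

# collect the homogeneous components of f by total degree
components = {}
for monom, coeff in Poly(f.expand(), x, y, z).terms():
    d = sum(monom)
    term = coeff * x**monom[0] * y**monom[1] * z**monom[2]
    components[d] = components.get(d, 0) + term

for comp in components.values():
    assert G.contains(comp)                    # each component is in the ideal
```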

Gradient of $\max_yf(x,y)$

Posted: 10 Oct 2021 07:22 PM PDT

Given $p(\textbf{x})$=$\max_{y}f(\textbf{x},y)$, do we have $\nabla p(\textbf{x})=\max_{y}(\nabla_{\textbf{x}}f(\textbf{x},y))$?
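In general, no. Under regularity conditions (Danskin's theorem), $\nabla p(\mathbf{x}) = \nabla_{\mathbf{x}} f(\mathbf{x}, y^*(\mathbf{x}))$, the gradient evaluated at the maximizer $y^*$, which is not the same as maximizing the gradient over $y$. A quick numerical sketch with the arbitrary example $f(x,y) = xy - y^2$:

```python
import numpy as np

ys = np.linspace(-1, 1, 2001)          # discretized y-domain

def p(x):
    return np.max(x * ys - ys**2)      # f(x, y) = x*y - y^2 maximized over y

h = 1e-5
grad_p = (p(h) - p(-h)) / (2 * h)      # central difference; equals f_x(0, y*(0)) = y*(0) = 0
max_of_grads = np.max(ys)              # max_y f_x(0, y) = max_y y = 1
print(grad_p, max_of_grads)            # the two quantities disagree
```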

Should a cusp or corner have infinitely many tangents?

Posted: 10 Oct 2021 07:21 PM PDT

We know that the derivative fails to exist at a cusp or corner. A corner is where you have two distinct tangent lines, and a cusp is where you have one tangent line which is vertical. I realized that a cusp and a corner can have infinitely many tangent lines. See the image below for what I mean: I drew vertical lines at the corner and cusp. The left graph is $y = |x|$ and the right graph is $y = x^{2/3}$. I am confused. Perhaps I do not understand what a tangent line really is. Can you explain the flaw in my logic? Thanks!

[image: graphs of $y = |x|$ and $y = x^{2/3}$ with vertical lines drawn at the corner and cusp]

Find the first fundamental form of the surface $z=z(x, y)$

Posted: 10 Oct 2021 07:15 PM PDT

Find the first fundamental form of the surface $z=z(x, y)$

The surface can be parametrized as $(x, y, z(x,y))$? If so, then I have that $E=p^2,\, F=pq$ and $G=q^2$, where $p=\partial_xz,\, q=\partial_yz$.
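For comparison, a sympy sketch of the standard computation for the graph parametrization $\mathbf r(x,y) = (x, y, z(x,y))$: the tangent vectors are $(1,0,p)$ and $(0,1,q)$, which gives $E = 1+p^2$, $F = pq$, $G = 1+q^2$ rather than $E = p^2$, $G = q^2$.

```python
from sympy import symbols, Function, Matrix, simplify

x, y = symbols('x y')
z = Function('z')(x, y)
r = Matrix([x, y, z])                  # graph parametrization r(x, y) = (x, y, z(x, y))
rx, ry = r.diff(x), r.diff(y)

E = rx.dot(rx)                         # 1 + z_x**2
F = rx.dot(ry)                         # z_x * z_y
G = ry.dot(ry)                         # 1 + z_y**2

p, q = z.diff(x), z.diff(y)
assert simplify(E - (1 + p**2)) == 0
assert simplify(F - p*q) == 0
assert simplify(G - (1 + q**2)) == 0
```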

Am I computing the number of elements of order 2 in this dihedral group correctly?

Posted: 10 Oct 2021 07:16 PM PDT

I am trying to solve the following problem:

Given the dihedral group:

$D_n = \{1, x, x^2, ..., x^{n-1}, y, xy, x^2y, ..., x^{n-1}y\}$

With $D_n$ being generated by $x, y$ using the relation definitions:

  1. $x^n = 1$
  2. $y^2 = 1$
  3. $yx = x^{n-1}y$

I am asked to determine how many elements of order $2$ are in $D_n$.

My Solution:

I identified a total of 4 elements of $D_n$ with an order of 2: $1$, $x^{\frac{n}{2}}$ (since $(x^\frac{n}{2})^2 = x^n = 1$), $y$ (since $y^2 = 1$), and $xy$ (since $(xy)^2 = (xy)(xy) = x(yx)y = xx^{n-1}yy = x^ny^2 = 1$).

However, am I missing anything?
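A quick computational cross-check (a sketch, counting directly from the relations): every reflection $x^k y$ squares to the identity, since $(x^k y)(x^k y) = x^k (y x^k) y = x^k x^{n-k} y^2 = 1$, while a rotation $x^k$ with $k \neq 0$ has order $2$ only when $2k \equiv 0 \pmod n$.

```python
def order_two_count(n):
    """Number of order-2 elements of D_n (the identity has order 1, not 2)."""
    rotations = sum(1 for k in range(1, n) if (2 * k) % n == 0)  # x^(n/2), iff n even
    reflections = n                                              # every x^k y
    return rotations + reflections

print(order_two_count(5), order_two_count(6))
```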

Help with eigenvalue and eigenvector problems

Posted: 10 Oct 2021 07:31 PM PDT

I am stuck on how to solve these eigenvalue and eigenvector questions. Could someone please guide me toward a solution? I would appreciate your help.

  • Let $A$ be a real $n \times n$ matrix such that $A^T = A$. Show that if all $n$ eigenvalues $\lambda_j$, $j=1,\dots,n$, are distinct ($\lambda_i \neq \lambda_j$ for $i \neq j$), then the eigenvectors are mutually orthogonal, so that $v_i^T v_j = 0$ if $i \neq j$.

  • Let $A$ be a symmetric positive definite $n \times n$ matrix and define the Rayleigh quotient for any $x \neq 0$ by

$$R(x)=\frac{x^T A x}{x^T x}.$$

Let $\lambda$ be an eigenvalue of $A$ with corresponding eigenvector $v$. Show that

$$\nabla R(v)=0$$

and that $\lambda=R(v)$,

i.e., the eigenvectors are the stationary points of the Rayleigh quotient and the eigenvalues are the local maxima, minima, or saddle points. Hint: note that

$$\nabla R(x) = \frac{2}{x^T x}\,Ax - \frac{2\,(x^T A x)}{(x^T x)^2}\,x.$$
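A quick numerical check of both claims (a sketch with a random symmetric matrix; positive definiteness is not needed for the check itself):

```python
import numpy as np

rng = np.random.default_rng(0)
M = rng.standard_normal((4, 4))
A = M + M.T                                    # real symmetric

R = lambda x: (x @ A @ x) / (x @ x)            # Rayleigh quotient
grad_R = lambda x: 2 / (x @ x) * (A @ x - R(x) * x)

w, V = np.linalg.eigh(A)                       # eigenvalues, orthonormal eigenvectors
v = V[:, 0]

assert np.allclose(grad_R(v), 0)               # stationary at an eigenvector
assert np.isclose(R(v), w[0])                  # R(v) equals the eigenvalue
assert np.allclose(V.T @ V, np.eye(4))         # eigenvectors mutually orthogonal
```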

Conclude that there exists no submersion from any compact smooth manifold M to any noncompact smooth manifold.

Posted: 10 Oct 2021 07:06 PM PDT

Let $f : M \to N$ be a submersion; we conclude that $f$ is an open map by the Canonical Submersion Theorem. If $M$ is compact and $N$ is Hausdorff, then any submersion $f : M \to N$ is a closed map. If $N$ is also connected, then any submersion $f : M \to N$ is surjective. But how can I conclude that there exists no submersion from any compact smooth manifold $M$ to any noncompact smooth manifold (e.g. $\mathbb{R}^n$)?
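For what it's worth, the facts already listed seem to combine as follows (a sketch, assuming $N$ connected; for disconnected $N$, restrict attention to the component containing $f(M)$):

```latex
f(M) \subseteq N \text{ is open (submersions are open maps) and closed ($M$ compact, $N$ Hausdorff),}\\
\text{so by connectedness } f(M) = N. \text{ But then } N = f(M) \text{ is compact, a contradiction.}
```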

Prove or give a counterexample for each of the following statements.

Posted: 10 Oct 2021 07:03 PM PDT

a-) If $A$ is an invertible matrix of size $n \times n$ with coefficients in $\mathbb K$, then there are two bases $B_1$ and $B_2$ of $\mathbb K^n$ such that $[\mathrm{Id}]^{B_2}_{B_1} = A$.

b-) If V is a vector space of dimension $n$, there are subspaces $W_0, W_1. . . ,W_n$ of V , such that $W_0 \subsetneq W_1 \subsetneq ... \subsetneq W_n$, and dim($W_i$) = $i$ for every $0 \le i \le n$.

c-) If $W_1$, $W_2$ are subspaces of $V$, then $(W_1 \cup W_2) \setminus (W_1 \cap W_2)$ is a subspace of $V$.

d-) If $W$ is a subspace of $V$, there is some linear transformation $T : V \to V$ such that $\ker(T) = W$.

e-) If $\dim(V) \lt \infty$ and $W_1$, $W_2$ are subspaces of $V$ such that every element of $V$ is uniquely written in the form $w_1 + w_2$ with $w_i \in W_i$, then $\dim(W_1) + \dim(W_2) = \dim(V)$.

f-) If $A \neq \{0\}$ is a finite subset of $V$, then there is $B \subseteq A$ such that $B$ is a basis of $\langle A\rangle$.

a-) Let $B$ be an invertible matrix with columns $b_1,\dots,b_n$, which constitute a basis $B_2$. As $A$ is also invertible, we can define a matrix $C$ by $C=BA^{-1}$. As $C$ is invertible, the columns $c_1,\dots,c_n$ of $C$ constitute a basis $B_1$. Now it remains to check that $A$ is the change-of-basis matrix from $B_1$ to $B_2$. This is indeed the case, as

$\sum_{i=1}^{n} A_{ij}c_i=C \begin{bmatrix}A_{1j}\\\vdots\\A_{nj}\end{bmatrix}=BA^{-1}\begin{bmatrix}A_{1j}\\\vdots\\A_{nj}\end{bmatrix}=B e_j=b_j.$
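A small numerical sanity check of this computation (a sketch; here $B_2$ is taken to be the standard basis, so $B = I$):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 4
A = rng.standard_normal((n, n))
assert abs(np.linalg.det(A)) > 1e-8            # invertible with probability 1

B = np.eye(n)                                  # columns b_j: basis B_2
C = B @ np.linalg.inv(A)                       # columns c_i: basis B_1

# sum_i A_ij c_i is the j-th column of C A, which should be b_j:
assert np.allclose(C @ A, B)
```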

b-) $\dim (W_0 + W_1 + \dots + W_n) = \dim(W_0) + \dim(W_1) + \dots + \dim(W_n) - \dim(W_0 \cap W_1 \cap \dots \cap W_n)$

$= \dim(W_0) + \dim(W_1) + \dots + \dim(W_n) - \dim(W_n)$

$= \dim(W_0) + \dim(W_1) + \dots + \dim(W_{n-1})$

c-) $W_1 \cap W_2$ is a subspace of $V$, but $W_1 \cup W_2$ is not necessarily one. So $(W_1 \cup W_2) \setminus (W_1 \cap W_2)$ is not a subspace, since we are removing $W_1 \cap W_2$, which in particular contains $0$.

d-) $\ker(T) = \{v \in V: T(v) = 0\}$; by definition, if $W=\{0\}$ the statement is true.

e-) $\dim(W_1 \cap W_2)+\dim(W_1 + W_2) = \dim(W_1)+ \dim(W_2)$

$\dim(W_1 + W_2) \le \dim(V)$

$\dim (W_1 + W_2) = \dim(W_1) + \dim(W_2) - \dim(W_1 \cap W_2)$

Let $V$ be a finite-dimensional real vector space and $W_1$, $W_2$ subspaces of $V$. If we have

$V = W_1 + W_2$ and $\dim(W_1)+\dim(W_2) \gt \dim(V)$,

then

$W_1 \cap W_2 \neq \{0\},$

that is, the sum $W_1 + W_2$ is not a direct sum. In fact, if the sum $W_1 + W_2$ were a direct sum we would have $W_1 \cap W_2 = \{0\}$, and so

$0 =\dim(W_1 \cap W_2)=\dim(W_1)+\dim(W_2)-\dim(W_1 + W_2) =\dim(W_1)+\dim(W_2)-\dim(V) \gt 0,$ a contradiction.

f-) If $B = A$ then it is immediate that $B$ is a basis of $\langle A\rangle$.

If $B \subsetneq A$ and $\dim \langle B\rangle \lt \dim \langle A\rangle$, then $B$ cannot generate $\langle A\rangle$.

That's what I was able to do (I couldn't think of counterexamples). How does it look?

Thanks for the attention

Integral involving derivative of delta function

Posted: 10 Oct 2021 07:03 PM PDT

How does one solve integrals of the following types:

  1. $$ I_1 = \int d\mathbf{r} [ \mathbf{\nabla_r}f(\mathbf{r})]\cdot [ \mathbf{\nabla_{r'}}g(\mathbf{r'})]\{\mathbf{\nabla_r} \mathbf{\nabla_{r'}}[\delta(\mathbf{r}-\mathbf{r'})]\} $$
  2. $$ I_2 = \int d\mathbf{r} [ \mathbf{\nabla_r}f(\mathbf{r})]\cdot [ \mathbf{\nabla_{r'}}g(\mathbf{r'})]\{\mathbf{\nabla_r }[\delta(\mathbf{r}-\mathbf{r'})\cdot \mathbf{A(r)}]\} $$

here $\mathbf{r}$ and $\mathbf{r'}$ are two different vectors in 3D space, $f(\mathbf{r})$ and $g(\mathbf{r'})$ two scalar functions, and $\mathbf{A(r)}$ is a vector field.

I know that for an integral with one derivative of a delta function, one can use distributional derivatives: $\int dx\, f(x)\, \delta^{(n)}(x-x_0)=(-1)^nf^{(n)}(x_0)$. But I am confused about how to translate this rule to two derivatives of delta functions at different points, and to a derivative of a delta function that is dotted with another vector field.
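The 1D rule can at least be checked symbolically before generalizing; a sympy sketch (with an arbitrary test function):

```python
from sympy import symbols, integrate, diff, DiracDelta, sin, oo

x, x0 = symbols('x x0', real=True)
f = sin(x)
n = 1                                          # derivative order of the delta

lhs = integrate(f * DiracDelta(x - x0, n), (x, -oo, oo))
rhs = (-1)**n * diff(f, x, n).subs(x, x0)      # distributional-derivative rule
assert (lhs - rhs).simplify() == 0
```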

Product of Grobner bases and product of the ideals generated by said Grobner bases

Posted: 10 Oct 2021 07:32 PM PDT

Suppose $G, G'$ are Grobner bases for the ideals $I, I' \subseteq F[x_1, \cdots, x_n]$ respectively. Then is it the case that $GG'$ is a Grobner basis for the ideal $II'$?

My intuition on this is no, but I am unable to come up with a concrete counterexample.

My own attempt: I have done a bunch of scratchwork, a mess of polynomial multiplications and coefficients, but I don't think I have much to show concretely, unfortunately.
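One way to probe this computationally (a sympy sketch): a finite set $F$ is a Groebner basis of the ideal it generates iff every leading monomial of the reduced Groebner basis of $\langle F\rangle$ is divisible by the leading monomial of some element of $F$, so candidate products $GG'$ can be tested mechanically.

```python
from sympy import symbols, groebner, Poly

x, y = symbols('x y')
ORDER = 'grevlex'

def lead_monom(f):
    return Poly(f, x, y).monoms(order=ORDER)[0]    # exponent tuple of the LM

def is_groebner(F):
    G = groebner(F, x, y, order=ORDER).exprs       # reduced basis of <F>
    lms = [lead_monom(f) for f in F]
    return all(any(all(a >= b for a, b in zip(lead_monom(g), lm)) for lm in lms)
               for g in G)

# sanity checks: a computed basis passes, a non-basis fails
assert not is_groebner([x**2 + y, x*y])            # its S-polynomial yields y**2

# test a candidate product G * G'
G1 = groebner([x**2 + y, x*y], x, y, order=ORDER).exprs
G2 = groebner([x + y, y**2], x, y, order=ORDER).exprs
prod = [g * h for g in G1 for h in G2]
print(is_groebner(prod))
```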

Solve $\frac{dy}{dx}=(y+1)(y-1)$ that passes through the point $(x,y)=(1,0)$

Posted: 10 Oct 2021 07:39 PM PDT

I need a bit of help to solve the following problem:

Solve $$ \frac{dy}{dx}=(y+1)(y-1) $$ that passes through the point $(x,y)=(1,0)$

I know it is separable, and when I integrate I get $$ -\frac{1}{2} \ln \left(y+1\right) + \frac{1}{2} \ln \left(y-1\right) = x+c_1, $$ and when I sub in $(x,y)=(1,0)$ I can get it down to $\frac{1}{2}\ln(-1)=0+c$, which is undefined. Am I on the right track? I am at a bit of a wall and definitely need some help.

A question about Axiomatic Set Theory: Axiom on $\in$ relation

Posted: 10 Oct 2021 07:23 PM PDT

I am exploring axiomatic set theory out of curiosity and was referring to https://youtu.be/AAJB9l-HAZs as an introduction to the subject. My question concerns this point (time in video 10:47):

The first axiom of the theory is stated as this: Axiom on $\in$-relation: $x \ \in\ y$ is a proposition if and only if x and y are sets.

This does not seem like an axiom of an axiomatic system. An axiom in an axiomatic system is a formula that is not a tautology in that logic system but is assumed to be a universal truth (please correct me if I am wrong). To me, this "axiom" seems to just be saying: $\in$ is a (binary) predicate, and the variables on which the truth value of $\in$ depends are called sets.

If I remember correctly, there exist some formulas in predicate logic which can neither be proved nor disproved in the logic. Perhaps the professor wants to say that only if the formulas can be proved/disproved are the variables sets? He goes on to 'provide an application of this axiom' by showing how Russell's paradox is avoided. To me, it simply seems like he demonstrates that the formula $\exists u (\forall z(z \in u \iff \lnot (z \in z)))$ can neither be proved nor disproved using predicate logic. (I may be extremely wrong here.) Since it can neither be proved nor disproved, $u$ is not a set?

Another thing is the formal version of the axiom, which is stated as $\forall x \forall y \ ((x \in y) \oplus \lnot (x \in y))$. I think this is simply a tautology? Why is this an axiom?

How can we prove that there is only 1 unique matrix A for a transformation T?

Posted: 10 Oct 2021 07:22 PM PDT

Let $T$ be a transformation with domain $\mathbb{R}^n$ and codomain $\mathbb{R}^m$ that can be represented by matrix multiplication $Ax$. Let $\{e_1, e_2, \dots, e_n\}$ be the standard basis for $\mathbb{R}^n$. Then $Ax=T(x)$, and the columns of $A$ are $T(e_1),T(e_2), \dots, T(e_n)$. At this stage, how can we prove that this choice of $A$ is unique?
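Uniqueness comes from the fact that $Ae_j$ is exactly the $j$-th column of $A$: if $Ax = Bx$ for all $x$, taking $x = e_j$ forces every column, hence every entry, of the two matrices to agree. A small numerical illustration (a sketch):

```python
import numpy as np

rng = np.random.default_rng(2)
A = rng.integers(-5, 5, (3, 4))

# multiplying by the j-th standard basis vector extracts the j-th column
E = np.eye(4)
columns = np.column_stack([A @ E[:, j] for j in range(4)])
assert np.array_equal(columns, A)          # so T(e_1), ..., T(e_n) determine A
```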

Proving or disproving asymptotic claims in algorithms [closed]

Posted: 10 Oct 2021 07:48 PM PDT

I am wondering what the simplest way is to prove or disprove $f(n) = O(f(2n))$. I have tried substitution methods, and they did not seem to be very helpful. What else should be tried?
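The claim is false in general. A standard counterexample (my choice, not from the post) is the decreasing function $f(n) = 2^{-n}$: then $f(n)/f(2n) = 2^{n}$ is unbounded, so no constant $c$ can give $f(n) \le c\, f(2n)$.

```python
# ratio f(n) / f(2n) for f(n) = 2**(-n); unbounded growth rules out f(n) = O(f(2n))
ratios = [2.0**(-n) / 2.0**(-2 * n) for n in range(1, 30)]
assert all(r == 2.0**n for r, n in zip(ratios, range(1, 30)))
assert ratios[-1] > 10**8                  # clearly unbounded
```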

factoring a cyclotomic polynomial modulo a prime

Posted: 10 Oct 2021 07:20 PM PDT

I am having trouble with this: it appears that $x^{12} - x^6 + 1$ has a root $\pmod p$ only when $p \equiv 1 \pmod{36}$, in which case it factors completely into twelve linear factors.

It is easy to show that any prime for which there is a root satisfies $p \equiv 1 \pmod{12},$ from $$ x^{12} - x^6 + 1 = (x^6-1)^2 + x^6 = (x^6)^2 - x^6 + 1 $$

There are plenty of interesting factorizations modulo other primes; it is just that all the factors are of degree two or higher.

Oh, this comes from an earlier question, here is the comment I made there: Find another sum of squares for $3^{12}-6^6+2^{12}$

Let me paste in some data; meanwhile, the question is: does $x^{12} - x^6 + 1$ have a root $\pmod p$ only when $p \equiv 1 \pmod{36}$?

jagy@phobeusjunior:~$ ./count_roots
prime
  37  count   1  p mod 36 : 1
  73  count   2  p mod 36 : 1
 109  count   3  p mod 36 : 1
 181  count   4  p mod 36 : 1
 397  count   5  p mod 36 : 1
 433  count   6  p mod 36 : 1
 541  count   7  p mod 36 : 1
 577  count   8  p mod 36 : 1
 613  count   9  p mod 36 : 1
 757  count  10  p mod 36 : 1
 829  count  11  p mod 36 : 1
 937  count  12  p mod 36 : 1
1009  count  13  p mod 36 : 1
1117  count  14  p mod 36 : 1
1153  count  15  p mod 36 : 1
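The pattern is quick to test further in software; note that $x^{12}-x^6+1$ is the $36$th cyclotomic polynomial $\Phi_{36}$, which is consistent with the observed condition. A brute-force sketch:

```python
from sympy import primerange

# does x**12 - x**6 + 1 (the 36th cyclotomic polynomial) have a root mod p
# only when p ≡ 1 (mod 36)?
for p in primerange(2, 1200):
    has_root = any((pow(a, 12, p) - pow(a, 6, p) + 1) % p == 0 for a in range(p))
    if has_root:
        assert p % 36 == 1, p
```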

Classifying prime ideals of $\mathbb{Q}[x]$

Posted: 10 Oct 2021 07:07 PM PDT

I have taken an undergraduate course in algebraic geometry and after that wanted to do some self-study with use of the "Foundations of Algebraic Geometry" textbook by Ravi Vakil. The course was very basic, we've only started talking about sheaves and schemes towards the end of it. I decided to start with the chapter about schemes, since I have already taken an introductory course in sheaf theory, but got stuck very early, namely on the exercise 3.2.C, which reads: Describe the set $\mathbb{A}^{1}_{\mathbb{Q}}$. It does not ask about the structure sheaf, so in practice it comes down to classifying prime ideals of $\mathbb{Q}[x]$. My attempt went as follows:

We know that every ideal in $\mathbb{Q}[x]$ is principal, so we can associate an element of $\mathbb{A}^{1}_{\mathbb{Q}}$ with the zeroes of the polynomial generating it, and the zeroes of polynomials over $\mathbb{Q}$ are algebraic numbers, so I need to get all the algebraic numbers up to some equivalence. We obviously have the rational numbers, associated with the ideals of the form $(x-q)$ where $q$ is rational, and the $0$ ideal.

Unfortunately that was the only thing I managed to classify (which wasn't even really done by me, since it was shown before the exercise). I am not even entirely sure what it would mean to classify them; it was very easy in the case of complex and real polynomials, but here there seems to be much more variation. For the numbers expressible with radicals, I believe I can show that two such numbers are zeroes of the same irreducible polynomial when they differ only by the signs of roots of even degree (so, for example, $2+\sqrt{5}+\sqrt[7]{2}$ would be equivalent to $2-\sqrt{5}+\sqrt[7]{2}$), but doing only that would not help me classify the polynomials with such roots. And even if I did manage that, there are still all the algebraic numbers not expressible with radicals, for which I have no idea how to begin looking for a possible equivalence describing which of them must be zeroes of the same polynomial, or how to describe those polynomials.

To formulate a question: how would one approach this? Is it a good first step to separate into cases (those expressible with radicals and those that are not)? I am not necessarily looking for a full solution, but maybe some clue or a key fact that should be used.
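Regarding the equivalence: two algebraic numbers give the same point of $\mathbb{A}^{1}_{\mathbb{Q}}$ exactly when they share the same minimal polynomial over $\mathbb{Q}$, i.e., when they are Galois conjugates, so the closed points correspond to monic irreducible polynomials in $\mathbb{Q}[x]$. A small sympy illustration:

```python
from sympy import minimal_polynomial, sqrt, symbols

x = symbols('x')
p1 = minimal_polynomial(2 + sqrt(5), x)
p2 = minimal_polynomial(2 - sqrt(5), x)
assert p1 == p2        # both x**2 - 4*x - 1: conjugates give the same prime ideal
```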

What's the measure of the circumradius of triangle ABC?

Posted: 10 Oct 2021 07:20 PM PDT

For reference (an exact copy of the question): In the acute triangle $ABC$, the distance between the feet of the altitudes to sides $AB$ and $BC$ is $24$. Calculate the measure of the circumradius of triangle $ABC$, given $\angle B = 37^\circ$.

(Answer: $25$)

My progress:

my figure and the relationships I found

I tried to draw $D \perp AC$ at $H$ $\implies \triangle DCH$ is a notable triangle ($37^\circ$–$53^\circ$), therefore its sides are in the ratio $3k:4k:5k$.

$FE$ is a cevian of a right triangle... but I can't see how it enters the solution.
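For what it's worth, one standard route (a sketch, using the contest convention $\sin 37^\circ = 3/5$, $\cos 37^\circ = 4/5$ from the $3$-$4$-$5$ triangle):

```latex
\text{Let } H_A, H_C \text{ be the feet of the altitudes to } BC, AB.
\text{ Then } \triangle BH_AH_C \sim \triangle BCA \text{ with ratio } \cos B,
\text{ so } H_AH_C = b\cos B = 24 \implies b = AC = 30,
\text{ and } R = \frac{b}{2\sin B} = \frac{30}{2\cdot \tfrac{3}{5}} = 25.
```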

Polar coordinates: why is the angle wrong despite the right-triangle computation? Converting $(x,y) = (-6, 2\sqrt{3})$

Posted: 10 Oct 2021 07:14 PM PDT

Trying to plot polar coordinates, I got all the steps right until the end, where I evaluate $\arctan \frac{\sqrt{3}}{-3} = -\frac{\pi}{6}$, but that puts the point in the wrong quadrant. Where did I go wrong?
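The issue is that $\arctan$ alone only returns angles in $(-\frac{\pi}{2}, \frac{\pi}{2})$, i.e., quadrants I and IV; since $x < 0$ here, you must add $\pi$ to land in quadrant II: $\theta = \pi - \frac{\pi}{6} = \frac{5\pi}{6}$. Programmatically, `atan2` handles the quadrant for you (a quick check):

```python
import math

x, y = -6.0, 2 * math.sqrt(3)
r = math.hypot(x, y)                     # sqrt(48) = 4*sqrt(3)
theta = math.atan2(y, x)                 # quadrant-aware arctangent

assert math.isclose(r, 4 * math.sqrt(3))
assert math.isclose(theta, 5 * math.pi / 6)
assert math.isclose(math.atan(y / x), -math.pi / 6)   # plain arctan misses the quadrant
```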


The distance of a point to a set

Posted: 10 Oct 2021 07:27 PM PDT

Suppose $F$ is a closed set in $\mathbb{R}^n$ with $n>1$, and $x$ a point outside $F$. We know that there exists a point $y\in F$ such that the distance between $x$ and $F$ is attained at $y$, i.e., $d(x,F)=d(x,y)=a>0$. Suppose $b>a$. How can one prove that there is a point $z$ such that $x$ belongs to the segment $[z,y]$ and such that $d(z,y)=b$? This is clear intuitively, and notice that if $b<a$, then $z\in [x,y]$ and the result is a consequence of the intermediate value theorem.

using Gauss's theorem to find symmetries in 2nd order PDEs

Posted: 10 Oct 2021 07:05 PM PDT

This problem arises in fluid dynamics and has me stumped. Consider a scalar function $\delta(\mathbf{x})$ and a vector function $\Phi(\mathbf{x})$ that satisfy $$ \nabla \cdot\left[(1-\delta)\nabla\Phi_i\right] = 0 $$ for each component $i$. Away from the origin, $\delta \to 0$ and $\Phi_i \to x_i$ as $r \to +\infty$. I am trying to prove the following symmetry, for a sufficiently large volume $V$ centered at the origin: $$ \int \left[\left(\Phi_i - x_i\right)\frac{\partial\delta}{\partial x_j} - \left(\Phi_j - x_j\right)\frac{\partial \delta}{\partial x_i}\right]\,\textrm{d}V = 0 $$ It was claimed to me that Gauss's theorem can be used in such a proof.

I have tried a number of approaches. Simpler versions of the problem (piecewise constant $\delta$, 1D simplifications) are trivially true. I have been unable to find any more complex closed analytic forms for $\delta$ and $\Phi$ that satisfy the above PDE, so I haven't been able to work with a concrete example. This problem comes from considering wave scattering in an inhomogeneous medium ($\delta$ models the inhomogeneity), so I know from the physical symmetry that the result above must be true. I suspect that the integrand may be expressible as a divergence to which Gauss's theorem can be applied, but I have not been successful.

I appreciate any insight here.

Sturm-Liouville problem with negative weight function

Posted: 10 Oct 2021 08:02 PM PDT

Consider the following SL problem

$$ y''(t) + \lambda w(t)y(t)=0 \\ y(0)=0, \qquad y'(1)=0 $$ with $w\in C([0,1])$ a weight function.

If we take $w=f$ for a certain $f\in C([0,1])$, $f>0$, it is well known that all the eigenvalues are positive and ordered.
If we now take $w=-f$, can we infer that

  1. the eigenvalues are all negative;
  2. the eigenvalues are those for the choice $w=f$, but with opposite sign?
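One observation that seems to settle both points (a sketch, not a reference): the eigenvalue equation is unchanged if the signs of $\lambda$ and $w$ are flipped simultaneously,

```latex
y'' + \lambda\,(-f)\,y \;=\; y'' + (-\lambda)\,f\,y \;=\; 0,
```

so $(\lambda, y)$ is an eigenpair for $w=-f$ exactly when $(-\lambda, y)$ is an eigenpair for $w=f$; the spectrum for $w=-f$ is the negative of the spectrum for $w=f$, and in particular all eigenvalues are negative.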

Can you also provide references for this?

Thank you

How can I evaluate the normed cumulative distribution transform explicitly?

Posted: 10 Oct 2021 07:24 PM PDT

Let $\mu_0, \mu_1$ be measures on $X,Y \subset \mathbb R$ with densities $f_0:X\to \mathbb R$ and $f_1:Y \to \mathbb R$, and let $T:X \to Y$ be a measurable map that pushes $\mu_0$ onto $\mu_1$, so that $$\int_{T^{-1}(A)}d\mu_0=\int_Ad\mu_1$$ for any Lebesgue-measurable $A \subset Y$. Through Lebesgue integration we get $$\int_{\inf X}^x f_0(\tau)d\tau=\int_{\inf Y}^{T(x)} f_1(\tau)d\tau. \quad (*)$$ One can then define the Cumulative Distribution Transform (CDT) of $f_1$ w.r.t. the reference $f_0$, which we denote by $\hat{f_1}:X \to \mathbb R$, given by $$\hat{f_1}(x)=(T(x)-x)\sqrt{f_0(x)} \quad (**)$$ where $T$ satisfies $(*).$

I now want to compute the CDT in the $L_2$-norm explicitly for the example $X=[0,1], Y=\mathbb R$ and $f_0(x)=1$ and $f_1(x)=\frac{1}{\sqrt{2\pi}}e^{-\frac{x^2}{2}}$.

So in order to compute the CDT, we need to solve for $T:[0,1] \to \mathbb R$ by plugging into $(*)$:

$$\int_{-\infty}^{T(x)}f_1(\tau)d\tau =\int_{-\infty}^{T(x)}\frac{1}{\sqrt{2\pi}}e^{-\frac{\tau^2}{2}} d\tau = \int_0^x 1\, d\tau = x.$$ By setting $\phi(x):=\int_{-\infty}^{x}\frac{1}{\sqrt{2\pi}}e^{-\frac{\tau^2}{2}} d\tau$ we can write this as $$\phi(T(x))=x,$$ where $\phi$ is monotonically increasing with an inverse, hence $T(x)=\phi^{-1}(x)$. Finally, plugging into $(**)$ yields $$\hat{f_1}(x)=(T(x)-x)\sqrt{f_0(x)}=\phi^{-1}(x)-x.$$

My question now is: how can I compute $$\Vert\hat{f_1}\Vert_{L^2}^2=\int_0^1(\phi^{-1}(x)-x)^2 dx?$$ I am unsure how to take the inverse of $\phi$, and I also wouldn't know how to evaluate the integral. Are there any online tools for that?
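One route is numerical quadrature with the normal quantile function, and the integral also has a closed form: since $\phi^{-1}(U)\sim N(0,1)$ for $U$ uniform on $[0,1]$, we get $\int_0^1 \phi^{-1}(x)^2\,dx = 1$ and, by substituting $x = \phi(z)$ and integrating by parts, $\int_0^1 x\,\phi^{-1}(x)\,dx = \frac{1}{2\sqrt{\pi}}$, so the value should be $\frac{4}{3} - \frac{1}{\sqrt{\pi}}$. This derivation is mine, so treat the sketch below as a cross-check:

```python
import math
from statistics import NormalDist

inv_cdf = NormalDist().inv_cdf            # phi^{-1}, the standard normal quantile

# midpoint rule for ∫_0^1 (phi^{-1}(x) - x)^2 dx
N = 100_000
value = sum((inv_cdf((k + 0.5) / N) - (k + 0.5) / N) ** 2 for k in range(N)) / N

closed_form = 4/3 - 1/math.sqrt(math.pi)  # derived separately; treat as a check
print(value, closed_form)
assert abs(value - closed_form) < 1e-3
```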

A curious infinite product of factorials

Posted: 10 Oct 2021 07:46 PM PDT

I found the following infinite product of factorials without proof: $$\small\prod_{n=1}^\infty\frac{{(2 n)!}^{20}\,{(8 n)!}^{32}\,{(32 n)!}^2}{{n!}^4\,{(4 n)!}^{37}\,{(16 n)!}^{13}}=\\\frac{\sin ^{14}\!\frac{\pi }{8}\cdot\sin \frac{\pi }{32} \cdot\sin \frac{3 \pi }{32} \cdot\sin \frac{5 \pi }{32} \cdot\sin \frac{7 \pi }{32}}{\sin ^6\!\frac{\pi }{16} \cdot\sin ^6\!\frac{3\pi }{16}}\cdot \frac{2^{1283/64}\,\pi^{14}\,\Gamma^{10} \!\left(\frac{1}{8}\right) \Gamma^2\! \left(\frac{5}{32}\right) \Gamma^2 \!\left(\frac{7}{32}\right)}{\Gamma^{18} \!\left(\frac{5}{8}\right) \Gamma^{10} \!\left(\frac{1}{16}\right) \Gamma^{10} \!\left(\frac{3}{16}\right) \Gamma^2 \!\left(\frac{17}{32}\right) \Gamma^2 \!\left(\frac{19}{32}\right)}.$$ We can verify that $$\small\frac{{(2 n)!}^{20}\,{(8 n)!}^{32}\,{(32 n)!}^2}{{n!}^4\,{(4 n)!}^{37}\,{(16 n)!}^{13}}=1+\mathcal O\left(n^{-3}\right),$$ so the product indeed converges.

  • Can you suggest how to prove its closed form on the right?
  • Is it possible to further simplify it?
  • Is it possible to find a simpler convergent infinite product of this form (involving only integer powers of factorials of integer multiples of $n$)?
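The claimed $1+\mathcal O(n^{-3})$ behavior of the factors (and hence convergence of the product) is easy to check numerically with log-gamma; a sketch (the bound $|\log(\text{factor})| \le n^{-3}$ below is an empirical choice, not a proven constant):

```python
from math import lgamma

def log_factor(n):
    lf = lambda k: lgamma(k + 1)               # log k!
    return (20*lf(2*n) + 32*lf(8*n) + 2*lf(32*n)
            - 4*lf(n) - 37*lf(4*n) - 13*lf(16*n))

# the log of each factor decays like n**-3, so the log-product converges
for n in range(3, 60):
    assert abs(log_factor(n)) < 1.0 / n**3

partial_log_product = sum(log_factor(n) for n in range(1, 400))
print(partial_log_product)                     # log of the partial product
```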

Nonlinear Least Squares Fit to Custom Function in C#

Posted: 10 Oct 2021 07:03 PM PDT

I have a pretty straightforward regression problem I need to solve within the .NET framework in C#. I have a nonlinear transfer function $$ Z = A + \frac{1}{\frac{1}{\frac{X}{256}*B+3*C}+\frac{1}{\frac{Y}{1024}*D+2*E}} $$ with two independent variables $$ X, Y $$ and five parameters $$ A, B, C, D, E $$ for which I need to optimize.

I've been able to successfully model and solve this problem with MATLAB's fit and Python SciPy's curve_fit with very accurate results. However, I have not been able to find a solution for C#. There are a couple of MIT-licensed NuGet packages available, but they either support only a single independent variable and no custom function (Math.NET) or give poor results (Accord.NET, which may very well be my fault, as evidenced here).

I was hoping someone here would have some ideas for alternative solutions or could point me in the right direction for writing my own custom method to solve this. MATLAB used the Trust Region Reflective algorithm, and SciPy used either Levenberg-Marquardt or Trust Region Reflective, according to their documentation. The data I am working with is

$$ \begin{gather} X = \{0, 128, 255, 0, 128, 255, 0, 128, 255\}\\ Y = \{0, 0, 0, 512, 512, 512, 1023, 1023, 1023\}\\ Z = \{89.66623397, 122.9866434, 123.8610312, 197.274736, 4255.419371, 7129.346848, 197.8635428, 4655.314692, 8335.298909\} \end{gather} $$

with a best guess of

$$ \begin{gather} A = 20\\ B = 10000\\ C = 50\\ D = 50000\\ E = 60 \end{gather} $$

and I would expect to get

$$ \begin{gather} A = 27.85\\ B = 9886.98\\ C = 56.87\\ D = 48581\\ E = 48.47 \end{gather} $$

Any direction or ideas are greatly appreciated.
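If you end up rolling your own, a basic Levenberg-Marquardt loop is short enough to port line-by-line to C#. Here is a minimal Python sketch on the data above (numeric forward-difference Jacobian, multiplicative damping; the parameter names match the post, everything else is my choice and is not a drop-in production solver):

```python
import numpy as np

def model(p, X, Y):
    A, B, C, D, E = p
    return A + 1.0 / (1.0 / (X / 256 * B + 3 * C) + 1.0 / (Y / 1024 * D + 2 * E))

X = np.array([0, 128, 255, 0, 128, 255, 0, 128, 255], float)
Y = np.array([0, 0, 0, 512, 512, 512, 1023, 1023, 1023], float)
Z = np.array([89.66623397, 122.9866434, 123.8610312,
              197.274736, 4255.419371, 7129.346848,
              197.8635428, 4655.314692, 8335.298909])

def jacobian(p, X, Y):
    # forward differences; central differences would be more accurate
    J = np.empty((len(X), len(p)))
    f0 = model(p, X, Y)
    for j in range(len(p)):
        q = p.copy()
        h = 1e-6 * max(1.0, abs(p[j]))
        q[j] += h
        J[:, j] = (model(q, X, Y) - f0) / h
    return J

def levenberg_marquardt(p0, X, Y, Z, iters=200, lam=1e-3):
    p = np.asarray(p0, float)
    sse = np.sum((Z - model(p, X, Y)) ** 2)
    for _ in range(iters):
        r = Z - model(p, X, Y)
        J = jacobian(p, X, Y)
        H = J.T @ J
        step = np.linalg.solve(H + lam * np.diag(np.diag(H)), J.T @ r)
        new_sse = np.sum((Z - model(p + step, X, Y)) ** 2)
        if new_sse < sse:                      # accept step, relax damping
            p, sse, lam = p + step, new_sse, max(lam / 3, 1e-12)
        else:                                  # reject step, increase damping
            lam *= 3
    return p, sse

p0 = np.array([20, 10000, 50, 50000, 60], float)
p_fit, sse_fit = levenberg_marquardt(p0, X, Y, Z)
print(p_fit)   # should approach A≈27.85, B≈9886.98, C≈56.87, D≈48581, E≈48.47
```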

Difference between topology and sigma-algebra axioms.

Posted: 10 Oct 2021 07:25 PM PDT

One distinct difference between the axioms of a topology and of a sigma-algebra is the asymmetry between union and intersection: a topology is closed under finite intersections and arbitrary unions, while a sigma-algebra is closed under countable unions and complements. This is clear mathematically, but is there a way to think about it so that we can define a geometric difference? In other words, I want an intuitive idea of the difference between these objects in applications.

What is the probability of two people telling the truth?

Posted: 10 Oct 2021 08:00 PM PDT

I have a similar problem described in this question: How to find the probability of truth?

The question reads:

A and B are independent witnesses in a case. The probability that A speaks the truth is $x$ and that of B is $y$. If A and B agree on a certain statement, how do we find the probability that the statement is true?

In my problem, I also know that the probability for the statement to be true is $p$.

The accepted answer starts with:

Let:

$A_t$ stand for "A says statement is true." and $A_f$ for "A says statement is false" and

$B_t$ stand for "B says statement is true." and $B_f$ for "B says statement is false" and

$S_t$ stand for "Statement is true" and $S_f$ for "Statement is false" and

Then, we know that:

$\text{Prob}(A_t | S_t) = \text{Prob}(A_f | S_f) = x$, and

$\text{Prob}(A_t | S_f) = \text{Prob}(A_f | S_t) = 1-x$, and

$\text{Prob}(B_t | S_t) = \text{Prob}(B_f | S_f) = y$, and

$\text{Prob}(B_t | S_f) = \text{Prob}(B_f | S_t) = 1-y$, and

My question is: aren't the equations above wrong?

Shouldn't $\text{Prob}(A_t \cap S_t) = \text{Prob}(A_f \cap S_f) = x$?

Shouldn't $\text{Prob}(A_t | S_t) = \frac{\text{Prob}(A_t \cap S_t)}{\text{Prob}(S_t)} = \frac{x}{p}$?

As I understand it, "A speaks the truth" means that A agrees on the statement and the statement is true, i.e. "A speaks the truth" is the intersection between $A_t$ and $S_t$.

Or, equivalently, "A speaks the truth" means that A doesn't agree and the statement is false, i.e. it is the intersection between $A_f$ and $S_f$.

UPDATE: with a numerical example, I found out that I'm wrong and the answer may be right. For certain $x$ and $p$ (between 0 and 1), $x/p$ may be greater than 1, and therefore it can't be a probability. So, where is the fallacy in my reasoning?
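The fallacy is in reading "A speaks the truth with probability $x$" as a joint probability; it is a conditional one, $\text{Prob}(A_t \mid S_t) = x$, regardless of whether the statement happens to be true. A quick simulation (a sketch) separating the two readings:

```python
import random

random.seed(0)
p, x = 0.3, 0.8                       # P(statement true), P(A reports honestly)
N = 200_000

n_s_true = n_joint = 0
for _ in range(N):
    s_true = random.random() < p      # the statement is true
    honest = random.random() < x      # A reports honestly this round
    a_says_true = s_true if honest else not s_true
    n_s_true += s_true
    n_joint += a_says_true and s_true

print(n_joint / n_s_true)             # ≈ x = 0.8   (conditional probability)
print(n_joint / N)                    # ≈ x * p = 0.24   (joint probability)
```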
