Saturday, April 24, 2021

Recent Questions - Mathematics Stack Exchange



Product operation for cosets

Posted: 24 Apr 2021 09:15 PM PDT

In Artin's book, he gives a lemma that for a normal subgroup $N \subset G$ and cosets $aN$, $bN$, the product set $(ab)N$ is also a coset; the proof simply uses the definition of normality: $$(aN)(bN) = a(Nb)N = a(bN)N = (ab)(NN) = (ab)N.$$ My first question is how to put this proof in the context of demonstrating that this product operation is well-defined, which requires proving that if $aN = a'N$ and $bN = b'N$, then $(ab)N = (a'b')N$. I am not sure whether this is in fact a proof of well-definedness. I know that we can define a set product, say $AB$, of elements $ab$ where $a \in A, b \in B$. But we can't "use" this product unless it is well-defined, in which case it is by definition a coset, which leads me to believe that Artin is in fact saying that this proof allows the operation to "work."

Can someone help me understand this?
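For readers who want to see the two statements side by side, here is a small computational sanity check (a Python sketch, not from Artin; the group $S_3$ with $N = A_3$ is chosen purely for illustration). It verifies both the lemma (the element-wise product set $(aN)(bN)$ equals $(ab)N$) and the well-definedness statement (if $aN=a'N$ and $bN=b'N$ then $(ab)N=(a'b')N$):

```
from itertools import permutations

def compose(p, q):
    # (p o q)(i) = p[q[i]]: apply q first, then p
    return tuple(p[i] for i in q)

def is_even(p):
    # parity via inversion count
    inv = sum(1 for i in range(len(p)) for j in range(i + 1, len(p)) if p[i] > p[j])
    return inv % 2 == 0

G = list(permutations(range(3)))     # the group S3
N = [p for p in G if is_even(p)]     # A3, a normal subgroup of S3

def coset(a):
    return frozenset(compose(a, n) for n in N)

# Lemma: the element-wise product set (aN)(bN) equals the coset (ab)N.
for a in G:
    for b in G:
        prod = frozenset(compose(x, y) for x in coset(a) for y in coset(b))
        assert prod == coset(compose(a, b))

# Well-definedness: aN = a'N and bN = b'N imply (ab)N = (a'b')N.
for a in G:
    for a2 in G:
        if coset(a) != coset(a2):
            continue
        for b in G:
            for b2 in G:
                if coset(b) == coset(b2):
                    assert coset(compose(a, b)) == coset(compose(a2, b2))

print("both checks pass for S3 / A3")
```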

Multivariable divergence theorem: two-cylinder question

Posted: 24 Apr 2021 09:14 PM PDT

Let $E$ be the intersection of the cylinders $x^{2}+y^{2} \leq 1, y^{2}+z^{2} \leq 1$. Compute the flux $$ \begin{array}{c} \iint_{\partial E} F \cdot d S \\ \text { where } F=\left(x y^{2}+\cos (y z)\right) i-\left(x^{2}+\sin (z x)\right) j+(z+\cos (x y)) k, \text { and } \partial E \end{array} $$ is oriented outward.

Could anyone please help me solve this? I have no idea how to approach it.
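Since $\nabla\cdot F = y^2 + 1$ here, the divergence theorem reduces the flux to $\iiint_E (y^2+1)\,dV$ over the intersection of the two cylinders. A Monte Carlo sketch in Python (illustrative sizes and seed) can at least check a final answer numerically; the estimate should land near $\tfrac{16}{15}+\tfrac{16}{3}=\tfrac{32}{5}=6.4$ if my hand computation of the two integrals is right:

```
import numpy as np

rng = np.random.default_rng(0)
n = 2_000_000
x, y, z = rng.uniform(-1, 1, size=(3, n))          # sample the bounding box [-1,1]^3
inside = (x**2 + y**2 <= 1) & (y**2 + z**2 <= 1)   # the solid E
box_volume = 8.0
flux = box_volume * np.mean(np.where(inside, y**2 + 1, 0.0))  # estimates ∭_E (y²+1) dV
print(flux)   # ≈ 6.4
```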

Why can we linearize the exponential regression?

Posted: 24 Apr 2021 09:06 PM PDT

When we are using observed data $(x_1,y_1)\ldots(x_m,y_m)$ for the exponential model $$y(x)=a_1\mathrm{e}^{a_2x}~(a_1>0),$$ it is natural to think about the linearized model $$\ln y=\ln a_1+a_2x.$$

It is not hard to understand this approach, but my question is, how can we prove rigorously that we are obtaining the same results? I.e., the following two optimization problems for $a_1$ and $a_2$ $$\text{minimize}\sum_{k=1}^m(a_1\mathrm{e}^{a_2 x_k}-y_k)^2$$ and $$\text{minimize}\sum_{k=1}^m(\ln a_1+a_2 x_k-\ln y_k)^2$$ yield the same $a_1$, $a_2$?

My attempts using multivariable calculus failed; all I can say is that since the problems are "equivalent" and each has a unique solution, the answer should be the same. Is that acceptable?
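In fact the two minimizers coincide only in special cases (e.g. when the data fit the model exactly); taking logarithms reweights the residuals, so in general the two problems have different solutions. A quick numerical experiment makes this concrete (a Python sketch with made-up data; the names, seed, and noise level are arbitrary):

```
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(1)
x = np.linspace(0, 2, 30)
y = 2.0 * np.exp(1.5 * x) * np.exp(rng.normal(0, 0.2, x.size))   # noisy data

# Nonlinear least squares on the original model y = a1 * exp(a2 * x)
(a1_nl, a2_nl), _ = curve_fit(lambda t, a1, a2: a1 * np.exp(a2 * t), x, y, p0=(1.0, 1.0))

# Linear least squares on the linearized model ln y = ln a1 + a2 * x
a2_lin, ln_a1 = np.polyfit(x, np.log(y), 1)

print("nonlinear: ", a1_nl, a2_nl)
print("linearized:", np.exp(ln_a1), a2_lin)   # generally NOT the same numbers
```

So the linearization is best viewed as a convenient approximation (or as exact least squares under multiplicative log-normal errors), not as an equivalent problem.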

Show that $E$ is orientable.

Posted: 24 Apr 2021 09:22 PM PDT

Suppose $(E,M,\pi)$ is a vector bundle and $M$ can be written as a union of two open subspaces $U$ and $V$ such that $U\cap V$ is connected. Assume that the restrictions of $E$ to $U$ and $V$ are orientable. I am stuck on this problem:

Show that $E$ is orientable.

What I know about orientability:

  1. A vector bundle $(E,M,\pi)$ of rank $n$ is orientable iff $\Lambda^nE^\ast$ is isomorphic to the trivial bundle of rank $1$ over $M$.

  2. Suppose $(E, M, \pi)$ is a vector bundle and $J:(E, M,\pi)\rightarrow (E, M,\pi)$ is a homomorphism such that $J\circ J=-id$. Then $E$ is orientable.

My initial thought was to use the two facts above to conclude that $E$ is orientable, but I couldn't get further than this. Any suggestions or hints are greatly appreciated.

Which is more proper: $... x, y, z \in \mathbb{R}... $ or $... x, y, z \in \mathbb{R}^3 ...$?

Posted: 24 Apr 2021 09:19 PM PDT

Some books use one and some use the other. If I had to establish my own set of rules for proof writing, which one would be better?

Oh, and if possible I would be very thankful for some guidance on good notational conventions in general.

Paint balls problem

Posted: 24 Apr 2021 08:59 PM PDT

There are $4$ white balls in a bag. Each time we draw one ball from it: if it is white, we paint it black; if it is black, we paint it white; then we return it to the bag. On average, how many draws do we need until all the balls are black?
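The number of black balls performs a random walk on $\{0,1,2,3,4\}$: from $k$ black balls we move to $k-1$ with probability $k/4$, else to $k+1$. A simulation sketch in Python pins down the numerical answer; solving the resulting first-step equations by hand I get $64/3 \approx 21.33$, which the simulation should reproduce:

```
import numpy as np

rng = np.random.default_rng(2)

def draws_until_all_black(n_balls=4):
    black, steps = 0, 0
    while black < n_balls:
        steps += 1
        if rng.random() < black / n_balls:   # drew a black ball: repaint it white
            black -= 1
        else:                                # drew a white ball: repaint it black
            black += 1
    return steps

print(np.mean([draws_until_all_black() for _ in range(200_000)]))   # ≈ 21.33
```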

Hadwiger-Nelson problem restricted to the rationals

Posted: 24 Apr 2021 08:54 PM PDT

In the Hadwiger-Nelson problem, any two points at unit distance must receive distinct colors. It is known that if we restrict the vertices to rational points (i.e. $\mathbb{Q}^2$), the chromatic number is exactly $2$. I cannot find a proof or reference anywhere, so if someone could briefly explain the intuition or outline of the result, that would be much appreciated.

What are the possible values of the other eigenvalue of $A$?

Posted: 24 Apr 2021 08:52 PM PDT

Let $M_{n}(K)$ denote the $n\times n$ matrices over a field $K$, and suppose $A\in M_{n}(K)$ has order $m$, i.e. $m$ is the smallest positive integer with $A^{m}=I_{n}$.

Now suppose $A\in M_{2}(\mathbb{C})$ has order $12$ and $i=\sqrt{-1}$ is an eigenvalue of $A$. What are the possible values of the other eigenvalue of $A$?

How can I start to find the other eigenvalue(s) of $A$? I will be grateful for any hint or advice.
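A hint that can be turned into code: over $\mathbb{C}$, $A^{12}=I$ makes $A$ diagonalizable (the polynomial $x^{12}-1$ has distinct roots), so both eigenvalues are 12th roots of unity and the order of $A$ is the lcm of their multiplicative orders. Since $i$ has order $4$, the other eigenvalue $z$ must satisfy $\operatorname{lcm}(4,\operatorname{ord}(z))=12$. A short Python sketch enumerates the candidates:

```
import cmath
from math import gcd

candidates = []
for d in [1, 2, 3, 4, 6, 12]:                 # possible orders of z (divisors of 12)
    if (4 * d) // gcd(4, d) == 12:            # need lcm(4, d) == 12, i.e. d in {3, 6, 12}
        candidates += [cmath.exp(2j * cmath.pi * k / d)
                       for k in range(1, d) if gcd(k, d) == 1]   # primitive d-th roots
for z in candidates:
    print(f"{z:.4f}")   # 8 candidates: the primitive 3rd, 6th and 12th roots of unity
```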

Linear functional is continuous if and only if it is bounded on a neighbourhood of the origin

Posted: 24 Apr 2021 08:51 PM PDT

Given a topological vector space $X$, a linear functional $f:X\rightarrow\mathbb{R}$ is continuous if and only if there exists an open neighbourhood $N$ of $0$ s.t. $|f(x)|\leq 1$ for all $x\in N$.

It is easy to prove that if $f$ is continuous then such an $N$ exists, but I do not know how to prove the reverse direction. Any help, please?

Prove that the following congruence involving Lucas sequence is true

Posted: 24 Apr 2021 08:48 PM PDT

I am proving a congruence that appears in a paper. It claims that for $t\in\mathbb{Z}$ and prime $p\geq5$ $$p\sum_{k=1}^{p-1} \frac{t^k}{k^2\binom{2k}{k}}\equiv \frac{2-v_p(2-t)-t^p}{2p}-p^2\sum_{k=1}^{p-1} \frac{v_k(2-t)}{k^3} \pmod{p^3}$$ where $v_n(P)$ is the Lucas sequence with $Q=1$, i.e. $v_n(P)=a^n+b^n$ where $a$ and $b$ are the roots of $x^2-Px+1=0$. The paper uses the following identity \begin{align*}\begin{split}p\binom{2p}{p}\sum_{k=1}^{p-1} \frac{t^k}{k^2\binom{2k}{k}} &= \frac{v_p(t-2)-t^p}{p}+p\sum_{k=1}^{p-1} \binom{2p}{p-k}\frac{v_k(t-2)}{k^2}+p\binom{2p}{p}H_{p-1}^{(2)}+\frac{1}{p}\binom{2p}{p}\\ &= \frac{v_p(t-2)-t^p}{p}+p\sum_{k=1}^{p-1} \binom{2p}{k}\frac{v_{p-k}(t-2)}{(p-k)^2}+p\binom{2p}{p}H_{p-1}^{(2)}+\frac{1}{p}\binom{2p}{p}.\end{split}\end{align*} Now we use the facts that $\binom{2p}{p}\equiv 2\pmod{p^3}$ and $p\binom{2p}{k}\equiv \frac{2p^2}{k}(-1)^{k-1} \pmod{p^3}$ for $k=1,\dots,p-1$. It is then easy to deduce that $$p\binom{2p}{p}\sum_{k=1}^{p-1} \frac{t^k}{k^2\binom{2k}{k}}\equiv 2p\sum_{k=1}^{p-1} \frac{t^k}{k^2\binom{2k}{k}} \pmod{p^3}$$ and \begin{align*}\begin{split}p\sum_{k=1}^{p-1} \binom{2p}{k}\frac{v_{p-k}(t-2)}{(p-k)^2} &\equiv 2p^2\sum_{k=1}^{p-1} \frac{(-1)^{k-1}v_{p-k}(t-2)}{k(p-k)^2} \pmod{p^3}\\ &\equiv 2p^2\sum_{k=1}^{p-1} \frac{(-1)^{p-k-1}v_{k}(t-2)}{(p-k)k^2} \pmod{p^3}\\ &\equiv 2p^2\sum_{k=1}^{p-1} \frac{v_{k}(2-t)}{(p-k)k^2} \pmod{p^3}\\ &\equiv -2p^2\sum_{k=1}^{p-1} \frac{v_k(2-t)}{k^3} \pmod{p^3},\end{split}\end{align*} since we have $v_k(-x)=(-1)^k v_k(x)$. Now the problem arises from the last two terms. Notice that we have $$p\binom{2p}{p}H_{p-1}^{(2)}=p\Bigg(\binom{2p}{p}-2+2\Bigg)H_{p-1}^{(2)}\equiv 2pH_{p-1}^{(2)} \pmod{p^3}.$$ Hence it suffices to prove that $$2pH_{p-1}^{(2)}+\frac{1}{p}\binom{2p}{p}\equiv \frac{2}{p}\pmod{p^3},$$ which is equivalent to proving that $$2p^2H_{p-1}^{(2)}\equiv 2-\binom{2p}{p}\pmod{p^4},$$ and I could not prove this. The paper says to use Wolstenholme's theorem $H_{p-1}^{(2)}\equiv 0 \pmod{p}$, yet I have no idea how to use it; any help would be appreciated.
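Before investing in a proof, the remaining claim $2p^2H_{p-1}^{(2)}\equiv 2-\binom{2p}{p}\pmod{p^4}$ is easy to sanity-check numerically (a Python sketch using exact rationals; since the denominator of the difference is coprime to $p$, the congruence holds iff $p^4$ divides the numerator):

```
from fractions import Fraction
from math import comb

for p in [5, 7, 11, 13, 17, 19, 23]:
    H2 = sum(Fraction(1, k * k) for k in range(1, p))   # H_{p-1}^{(2)} exactly
    diff = 2 * p * p * H2 - (2 - comb(2 * p, p))
    assert diff.denominator % p != 0                     # p-integral, so test the numerator
    print(p, diff.numerator % p**4 == 0)
```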

$E$ is orientable given condition

Posted: 24 Apr 2021 09:15 PM PDT

Suppose $(E, M, \pi)$ is a vector bundle and $J:(E, M,\pi)\rightarrow (E, M,\pi)$ is a homomorphism such that $J\circ J=-id$. Then $E$ is orientable.

What I am thinking of is applying this fact: a vector bundle $(E,M,\pi)$ of rank $n$ is orientable iff $\Lambda^nE^\ast$ is isomorphic to the trivial bundle of rank $1$ over $M$. Any help is greatly appreciated.

May I know which step I have gone wrong in finding the variance?

Posted: 24 Apr 2021 09:17 PM PDT

I want to find $\mathbb{V}\mathrm{ar}[Y]$, where $Y=\min(X,4)$ and $$f(x)=\frac{1}{5}, \quad 0<x<5.$$ What I have done is to use the law of total variance, $$\mathbb{V}\mathrm{ar}[Y]=\mathbb{E}_I[\mathbb{V}\mathrm{ar}[Y\mid I]]+\mathbb{V}\mathrm{ar}_I[\mathbb{E}[Y\mid I]],$$ where $I$ is the indicator of whether $X$ exceeds $4$. Substituting all the values, $$\mathbb{V}\mathrm{ar}[Y]=\mathbb{V}\mathrm{ar}_I[2,4]+\mathbb{E}_I[1.333333,0],$$ and I got $$\mathbb{V}\mathrm{ar}[Y]=4.3.$$

May I know at which step I have gone wrong? Thank you.
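For comparison, here is a sketch (a hypothetical Python check, not part of the original post) that evaluates the same law-of-total-variance decomposition numerically and confirms it against Monte Carlo; both give $\frac{128}{75}\approx 1.707$, which suggests the decomposition is set up correctly but mis-evaluated in the final step:

```
import numpy as np

p0, p1 = 4/5, 1/5            # P(X < 4), P(X >= 4)
m0, m1 = 2.0, 4.0            # E[Y | X<4] (Y uniform on (0,4)), E[Y | X>=4] (Y = 4)
v0, v1 = 16/12, 0.0          # Var[Y | X<4] = 4^2/12, Var[Y | X>=4] = 0

mean_of_vars = p0 * v0 + p1 * v1
var_of_means = p0 * m0**2 + p1 * m1**2 - (p0 * m0 + p1 * m1) ** 2
print("law of total variance:", mean_of_vars + var_of_means)   # 128/75 ≈ 1.7067

rng = np.random.default_rng(3)
y = np.minimum(rng.uniform(0, 5, 1_000_000), 4)
print("Monte Carlo:", y.var())                                 # ≈ 1.71
```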

Monotonic function that is not of bounded variation

Posted: 24 Apr 2021 09:00 PM PDT

Are all monotonic functions of bounded variation, or is there a counterexample: a function that is monotonic but not of bounded variation?

Value of the upper sum for a given partition of the following function

Posted: 24 Apr 2021 09:03 PM PDT

Consider the rectangle $I=[0,1] \times [0,1]$ and the function $f:I \rightarrow \mathbb{R}$ given by $f(x,y)=1$ if $y>x$ and $f(x,y)=0$ if $y \leq x$. Given a sequence of archimedean partitions $\{P_k\}$, find $U(f,P_k)$. After dividing both intervals into $k$ subintervals of equal length, I found that $f$ attains the value $1$ on all the rectangles intersecting the diagonal, as well as on all the rectangles not intersecting the diagonal with $y>x$. So $U(f,P_k)$ should be $k \frac{1}{k^2}+\frac{k^2-k}{2k^2}$, i.e. $U(f,P_k)=\frac{1}{2}(\frac{1}{k}+1)$.

However, I found a solution online which said the answer should be $\frac{1}{2}(1-\frac{1}{k})$. So which answer is correct? Am I correct to say that there are $k$ rectangles in the partition intersecting the diagonal on which $f$ attains a maximum value of $1$, and $\frac{k^2-k}{2}$ rectangles not intersecting the diagonal on which $f$ also attains a maximum value of $1$, while $f$ is zero on all the remaining rectangles?
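A brute-force evaluation settles which closed form is right (a Python sketch; the supremum over a closed subrectangle is $1$ exactly when some point of it satisfies $y>x$, i.e. $j+1>i$ in the indexing below):

```
def upper_sum(k):
    total = 0.0
    for i in range(k):          # x-interval [i/k, (i+1)/k]
        for j in range(k):      # y-interval [j/k, (j+1)/k]
            if j + 1 > i:       # some point of this rectangle satisfies y > x
                total += 1.0 / k**2
    return total

for k in [4, 10, 100]:
    print(k, upper_sum(k), 0.5 * (1 + 1/k), 0.5 * (1 - 1/k))
# upper_sum(k) matches (1/2)(1 + 1/k), supporting the asker's count
```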

Generating Special Rational triangles

Posted: 24 Apr 2021 09:21 PM PDT

I'm stuck on this problem for quite some time:

Call a triangle a Special Rational triangle if its area is rational and its side lengths are consecutive positive integers. Can we find a closed form which generates all Special Rational triangles?

I have tried this one for quite some time; I was able to find a nice closed form in terms of a Diophantine equation, but I'm not satisfied with it. Your insight would be very helpful.

Thanks in advance.
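For sides $(n-1, n, n+1)$, Heron's formula gives area $\frac{n}{4}\sqrt{3(n^2-4)}$, so the area is rational iff $3(n^2-4)$ is a perfect square. A Python sketch that searches for such $n$ (and empirically exhibits the pattern behind a closed form):

```
from math import isqrt

def special(n):                   # sides n-1, n, n+1; area = (n/4) * sqrt(3(n^2-4))
    d = 3 * (n * n - 4)
    r = isqrt(d)
    return r * r == d

hits = [n for n in range(3, 1_000_000) if special(n)]
print(hits)   # [4, 14, 52, 194, 724, ...]; empirically n_{k+1} = 4*n_k - n_{k-1}
```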

Are all highly composite numbers greater than or equal to 12 divisible by 12? How could this be proven?

Posted: 24 Apr 2021 09:20 PM PDT

Highly composite numbers are positive integers with more divisors than any smaller positive integer. Looking through the OEIS sequence, I noticed that all the numbers listed that were greater than or equal to twelve were divisible by twelve. I haven't really seen this discussed anywhere.
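A small empirical check is easy to write (a Python sketch; this is evidence, not a proof — a proof would likely go through Ramanujan's structure theorem that a highly composite number is $2^{a_2}3^{a_3}5^{a_5}\cdots$ with non-increasing exponents over consecutive primes):

```
def num_divisors(n):
    cnt, d = 0, 1
    while d * d <= n:
        if n % d == 0:
            cnt += 1 if d * d == n else 2
        d += 1
    return cnt

best, hcns = 0, []
for n in range(1, 100_000):
    t = num_divisors(n)
    if t > best:
        best = t
        hcns.append(n)                                  # n is highly composite

print([n for n in hcns if n >= 12])                     # 12, 24, 36, 48, 60, ...
print([n for n in hcns if n >= 12 and n % 12 != 0])     # expect: []
```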

Let $K$ be a finite field. Prove that the product of nonzero elements of $K$ is $-1$.

Posted: 24 Apr 2021 09:21 PM PDT

As stated, let $K$ be a finite field. Prove that the product of the nonzero elements of $K$ is $-1$. Here $|K|=q=p^r$, $r\geq 1$, $p$ prime. I have that $K=\{a_1, a_2,\dots, a_q\}$ s.t. each $a_i$ satisfies the polynomial equation $x^q-x=0$. Since $K$ is a field, I know it contains $1$ and $-1$. I also know that $K^{\times}$ is cyclic with order $q-1$.

The problem doesn't specify how many elements of $K$ form a product that equals $-1$, so I'm assuming an arbitrary product? I have so far that $1\cdot (-1)=-1\in K$, and for any $a_i$ there exist $1/a_i, -1\in K$ s.t. $a_i\cdot \frac{1}{a_i}\cdot (-1)=1\cdot (-1) =-1\in K$.

This seems too trivial and I'm wondering where I may have gone wrong. Thank you in advance!
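The standard proof idea is to pair each element of $K^{\times}$ with its inverse: the factors cancel in pairs, except for the self-inverse elements, which are exactly the roots of $x^2=1$. A Python sketch checking the statement in the prime fields $\mathbb{F}_p$ (the general case $q=p^r$ would need a field implementation, which I omit):

```
for p in [3, 5, 7, 11, 13, 101]:
    prod = 1
    for a in range(1, p):        # the nonzero elements of F_p
        prod = (prod * a) % p
    print(p, prod == p - 1)      # product is -1 mod p (Wilson's theorem)
```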

What is the minimal set of comparisons that determines a monomial order?

Posted: 24 Apr 2021 09:00 PM PDT

A monomial order in $k[x_1, x_2, \ldots, x_n]$ for a field $k$ is a relation $\prec$ on the monomials such that:

  1. $\prec$ is a total order;
  2. if $m_1 \prec m_2$ then $m_3m_1 \prec m_3m_2$ for any three monomials $m_1, m_2, m_3$; and
  3. any descending chain of monomials $m_1 \succ m_2 \succ \ldots$ terminates (is finite).

Notice that, if you knew only some of the comparisons in a given monomial order, you could determine some others. For example, given $x_1 \prec x_2$, we can determine that $x_1^2 \prec x_1x_2$. Therefore, to completely determine a monomial order, it suffices to give only a proper subset of the comparisons. What is the minimal set of comparisons that must be known before the whole monomial order is known? Can this set be finite?

Initially, I thought it might be enough to simply specify the order on the variables, i.e. the degree-$1$ monomials. Evidently this is not the case: for instance, Wikipedia adopts the convention that $x_1 \succ x_2 \succ x_3 \succ \ldots \succ x_n$ always, yet given just those comparisons I think we cannot determine whether $x_1x_4 \prec x_2x_3$ or $x_1x_4 \succ x_2x_3$.
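To make the last point concrete, here is a sketch (plain Python; the key functions are ad hoc, not from a library) of two standard monomial orders, lex and grevlex, that agree on all the variables $x_1 \succ x_2 \succ x_3 \succ x_4$ yet rank $x_1x_4$ against $x_2x_3$ differently, so the degree-$1$ comparisons alone cannot determine the order:

```
def lex_key(e):
    # lex: compare exponent tuples left to right
    return e

def grevlex_key(e):
    # grevlex: higher total degree first; ties broken by the SMALLER
    # exponent in the rightmost differing variable winning
    return (sum(e), tuple(-x for x in reversed(e)))

m1 = (1, 0, 0, 1)   # x1 * x4
m2 = (0, 1, 1, 0)   # x2 * x3
print("lex:     x1*x4 > x2*x3 ?", lex_key(m1) > lex_key(m2))         # True
print("grevlex: x1*x4 > x2*x3 ?", grevlex_key(m1) > grevlex_key(m2)) # False
```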

Measurability of a solution to $Ax=b$ for measurable symmetric matrix

Posted: 24 Apr 2021 08:52 PM PDT

Let $(\Omega, \mathcal F) $ be a measurable space; $A:\Omega \to \mathbb R^{n\times n} $ be a measurable matrix-valued function and $b:\Omega\to \mathbb R^n$ be a measurable vector-valued function. Assume that $A(\omega) $ is symmetric positive semi-definite and $b(\omega) \in \operatorname{Range}(A(\omega)) $ for all $\omega\in\Omega$. Hence for each $\omega$ there exists an $x(\omega)$ such that $A(\omega) x(\omega) =b(\omega) $. However, I'm wondering about the following:

Problem. Does there exist a choice of $x:\Omega\to\mathbb R^n$ that is measurable?

Attempts. I've thought of using the Moore-Penrose inverse $A^+$, but I'm not sure how it gives a measurable function $x$. Since $x=A^+b$ is a solution and the minimizer of the corresponding linear least-squares problem, maybe this boils down to finding a measurable selection; for example, this result comes to mind. However, I'm not sure how I should apply it, as I couldn't verify its requirements.

Another way is to get $x$ via Gaussian elimination. The operations can be written so as to be measurable, but at some point there might be infinitely many choices, so I'll be stuck in the same position of having to choose a measurable one among those solutions.

I also thought of finding the eigenvalue decomposition of $A$: writing $A=Q^\top \Lambda Q$, it is not difficult to solve for $x$, and this yields measurability as long as the decomposition of $A$ is measurable, i.e. $Q$ and $\Lambda$ are both measurable. I've heard of algorithms, but they were for non-singular matrices with non-repeated eigenvalues and many restrictions. This question is related, but the argument seems non-trivial to me; I believe there is an easier way.

Any help, reference or hint is appreciated. Thanks in advance.
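One concrete route for the Moore-Penrose idea: for symmetric PSD $A$ with $b\in\operatorname{Range}(A)$, the regularized solutions $x_\varepsilon=(A+\varepsilon I)^{-1}b$ converge to $A^+b$ as $\varepsilon\downarrow 0$; each map $\omega\mapsto(A(\omega)+\varepsilon I)^{-1}b(\omega)$ is a continuous function of $(A,b)$, hence measurable, and a pointwise limit of measurable functions is measurable. A numerical sketch of the limit (illustrative sizes and seed):

```
import numpy as np

rng = np.random.default_rng(4)
B = rng.normal(size=(4, 2))
A = B @ B.T                            # symmetric PSD, rank 2
b = A @ rng.normal(size=4)             # ensures b is in Range(A)

x_pinv = np.linalg.pinv(A) @ b         # the Moore-Penrose solution A^+ b
for eps in [1e-2, 1e-4, 1e-6, 1e-8]:
    x_eps = np.linalg.solve(A + eps * np.eye(4), b)
    print(eps, np.linalg.norm(x_eps - x_pinv))   # shrinks with eps
```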

Let $f(X)=\|X\|$ and $g(X)=\|X\|^2$, $X\in \mathbb{R}^n$.

Posted: 24 Apr 2021 09:02 PM PDT

Let $f(X)=\|X\|$ and $g(X)=\|X\|^2$, $X\in \mathbb{R}^n$.

Give an example of points $X$ and $Y$ at distance one unit such that:

$a)$ $|g(Y) - g(X)|> 10^{60}$

$b)$ Show that $f$ is uniformly continuous

$c)$ Show that $g$ is not uniformly continuous

For part $a)$ we could consider points whose first coordinates are $10^{60}$ and $10^{60}+1$. For $b)$, one can see that $f$ is uniformly continuous, but I don't know how to start a formal proof.
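A hint for $b)$ (a sketch, using only the reverse triangle inequality): for all $X, Y$,

$$|f(X)-f(Y)| = \big|\,\|X\|-\|Y\|\,\big| \le \|X-Y\|,$$

so $f$ is $1$-Lipschitz and $\delta=\varepsilon$ works in the definition of uniform continuity. For $c)$, the same kind of points as in $a)$ help: with $X_n = n\,e_1$ and $Y_n = (n+\delta)\,e_1$ one gets $|g(Y_n)-g(X_n)| = 2n\delta+\delta^2 \to \infty$ as $n\to\infty$ for any fixed $\delta>0$, so no single $\delta$ can serve all points.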

The probability a Markov Chain never reaches a state

Posted: 24 Apr 2021 09:21 PM PDT

Given a Discrete Time Markov Chain and an initial distribution, how do you find the probability the chain will never reach a state?

For example, for the simple DTMC below, given that it starts in state $0$, what would be the probability that it never reaches state $2$?

$$\begin{bmatrix}1/2&1/2&0\\1/3&1/3&1/3\\0&1&0\end{bmatrix}$$

My attempt: I know how to find the probability of being in a certain state at a time $n$ (multiply the initial distribution by the transition matrix raised to the $n$th power), so I was thinking I would take $1$ minus this probability for a sufficiently large $n$ (to find the probability the chain eventually reaches the state), but that wouldn't guarantee the chain had never reached the state.

I saw this: Markov Chain never reaches a state but it didn't answer the second question.
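The standard method is to make the target state absorbing and solve a linear system for the hitting probabilities $h(i)=P(\text{ever reach }2\mid X_0=i)$: $h(2)=1$ and $h(i)=\sum_j P_{ij}h(j)$ for $i\ne 2$. A numpy sketch for the matrix above (here the chain is irreducible and finite, so the answer comes out $0$):

```
import numpy as np

P = np.array([[1/2, 1/2, 0],
              [1/3, 1/3, 1/3],
              [0,   1,   0]])

others = [0, 1]                        # the non-target states
Q = P[np.ix_(others, others)]          # transitions among non-target states
r = P[others, 2]                       # one-step probabilities into state 2
h = np.linalg.solve(np.eye(len(others)) - Q, r)   # solve (I - Q) h = r

print("P(ever reach 2 | start 0) =", h[0])        # 1.0
print("P(never reach 2 | start 0) =", 1 - h[0])   # 0.0
```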

Is there a natural way to "project" an arbitrary matrix to an orthogonal matrix?

Posted: 24 Apr 2021 09:15 PM PDT

I am dealing with an optimization problem where I need to find an optimal rotation matrix. Let me first formulate the problem.

Input:

  • An initial rotation matrix $M\in SO(3)\subset\mathbb{R}^{3\times 3}$
  • A 3D point set $P$
  • A corresponding ground truth point set $G$

The rotation matrix is used to rotate the points by matrix multiplication, along with other non-linear transformations. Suppose the transformations as a whole are denoted by $f_M(\cdot)$; then the transformed point set is $P'=f_M(P)$. The problem is to find an optimal rotation matrix $M$ such that the transformed points $P'$ are closest to the ground truth points $G$, i.e.

$$ \min_{M}\sum_{i}\|G_i-f_M(P_i)\|_2^2 $$

where $P_i$ is the $i$-th point and $G_i$ is its corresponding ground truth point.

Since the problem is nonlinear, I try to solve it with gradient descent. In each iteration, the following steps are adopted:

  1. Compute loss $J=\sum_{i}\|G_i-f_M(P_i)\|_2^2$.

  2. Compute gradient $\partial J/\partial M$ and set $\Delta M=-\alpha(\partial J/\partial M)$ where $\alpha$ is the learning rate.

  3. Update $M\leftarrow M+\Delta M$.

The problem is, I need $M$ to be a rotation matrix, i.e. $M\in SO(3)$. But the updating formula above is very likely to violate this constraint. I can think of two ways of solving this:

  1. After each update, project $M$ to $SO(3)$.

  2. Before each iteration, parametrize $M$ with parameters that naturally represent $SO(3)$, and use gradient descent on these parameters instead.

But I am not sure how to computationally obtain the projection to $SO(3)$ or the parametrization.
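For option 1, the standard answer (the orthogonal Procrustes / polar-decomposition projection) is: if $M = U\Sigma V^\top$ is an SVD, then the rotation nearest to $M$ in Frobenius norm is $R = U\,\mathrm{diag}(1,1,\det(UV^\top))\,V^\top$, where the determinant factor forces $\det R = +1$. A numpy sketch:

```
import numpy as np

def project_to_SO3(M):
    """Nearest rotation to M in Frobenius norm (orthogonal Procrustes)."""
    U, _, Vt = np.linalg.svd(M)
    D = np.diag([1.0, 1.0, np.linalg.det(U @ Vt)])   # flip one sign if det would be -1
    return U @ D @ Vt

M = np.eye(3) + 0.1 * np.random.default_rng(5).normal(size=(3, 3))
R = project_to_SO3(M)
print(np.allclose(R @ R.T, np.eye(3)), np.isclose(np.linalg.det(R), 1.0))  # True True
```

For option 2, a common choice is to parametrize with a quaternion or an axis-angle vector and map to $SO(3)$ through the exponential map, so that every parameter value yields a valid rotation.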

Computing projective closure of $\mathcal Z(y-x^2) \subset \mathbb A^2$ in $\mathbb P^2$.

Posted: 24 Apr 2021 09:15 PM PDT

Using notation from Hartshorne's Algebraic Geometry (see below), the projective closure of $\mathcal Z(y-x^2) \subset \mathbb A^2$ in $\mathbb P^2$ is the closure (in $\mathbb P^2$) of $$\begin{align*} \mathcal Z(xz-y^2) \cap U_x &= \{[a:b:c] \mid ac=b^2, a \ne 0\}\\ &=\{[a:b:c] \mid c=b^2/a, a \ne 0\}\\ &=\{[1:b/a:b^2/a^2] \mid a \ne 0\}, \end{align*}$$ where $U_x$ is the complement of the zero set of $x$ in $\mathbb P^2$.

How do we proceed in computing the projective closure?
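One way to proceed (a sketch, under the chart conventions above): the set computed so far is the image of the parametrization $t \mapsto [1:t:t^2]$, and the closure adds exactly its limit points, namely

$$[1:t:t^2] = \left[\tfrac{1}{t^2} : \tfrac{1}{t} : 1\right] \;\longrightarrow\; [0:0:1] \qquad (t\to\infty).$$

Conversely, a point of $\mathcal Z(xz-y^2)$ outside $U_x$ has $a=0$, hence $b^2=0$, leaving only $[0:0:1]$; so the projective closure is all of $\mathcal Z(xz-y^2)$, i.e. the affine parabola together with the single point $[0:0:1]$ at infinity.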




Relevant information from Hartshorne's Algebraic Geometry: [screenshots of the relevant definitions omitted]

Upper bounds on Gaussian norm maxima; maximal inequality for the 2-norm of a chi-squared($m$) vector

Posted: 24 Apr 2021 09:23 PM PDT

I was reading Negahban et al. (2012), "A Unified Framework for High-Dimensional Analysis of M-Estimators with Decomposable Regularizers", and the paragraph between eq. (45) and eq. (46) essentially claims that tail bounds for $\chi^2$-variates give

$$E\Big[\max_{t=1,\dots,N}\Vert\epsilon_t\Vert_2\Big]\leq\sqrt{m}+\sqrt{3\log N},$$

where each $\epsilon_t$ is a standard Gaussian vector with $m$ elements.

I think I understand where the $\sqrt{m}$ term comes from, but I couldn't use it to derive such a maximal inequality.

I have studied similar maximal inequalities for Gaussian maxima, but I am not sure about this chi-squared tail bound (or Gaussian norm maxima). How can I verify this claim?
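A Monte Carlo check of the claimed bound is straightforward (a Python sketch with illustrative sizes; this verifies the inequality empirically, not the derivation):

```
import numpy as np

rng = np.random.default_rng(6)
m, N, trials = 50, 100, 500
eps = rng.normal(size=(trials, N, m))              # N standard Gaussian m-vectors per trial
max_norms = np.linalg.norm(eps, axis=2).max(axis=1)

print("empirical E[max_t ||eps_t||_2] ≈", max_norms.mean())
print("claimed bound sqrt(m)+sqrt(3 log N) =", np.sqrt(m) + np.sqrt(3 * np.log(N)))
```

For the derivation itself, the usual route is Gaussian concentration: $\epsilon\mapsto\|\epsilon\|_2$ is $1$-Lipschitz, so $P(\|\epsilon_t\|_2 \ge E\|\epsilon_t\|_2 + s) \le e^{-s^2/2}$, and a union bound over $t=1,\dots,N$ combined with $E\|\epsilon_t\|_2\le\sqrt{m}$ gives a bound of this shape.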

E[ |X-Y| | Y = y ]

Posted: 24 Apr 2021 09:15 PM PDT

Exercise 12.6. Suppose $X$ and $Y$ are independent $U[0,1]$ random variables.

Find the conditional expectation $E(|X-Y| \mid Y=y)$.

The answer is $y^2 - y + \frac{1}{2}$, but I am not sure why.

I calculated $\frac{1}{3}$ and have no idea how $y^2 - y + \frac{1}{2}$ is the solution; any help would be greatly appreciated. I am not familiar with the script for typing up my answer, so I attached an image of my hand calculations.

So, if $z = |x-y|$:

[Image of hand calculations omitted.]
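A sketch of the computation: by independence, conditioning on $Y=y$ simply fixes $y$, so

$$E\big(|X-Y|\,\big|\,Y=y\big)=\int_0^1 |x-y|\,dx=\int_0^y (y-x)\,dx+\int_y^1 (x-y)\,dx=\frac{y^2}{2}+\frac{(1-y)^2}{2}=y^2-y+\frac12.$$

The value $\frac13$ is what you get by also averaging over $y$: $\int_0^1 (y^2-y+\frac12)\,dy=\frac13$, which is the unconditional expectation $E|X-Y|$.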

Where did uniquation go?

Posted: 24 Apr 2021 09:17 PM PDT

Several posts both on the main site and on meta mention uniquation as a tool for searching online for websites containing some mathematical formula.[1]

The URL for this website provided in those answers is http://uniquation.com/. If you try it, you can see that the link no longer works (you only get the information that the domain is for sale).

Question 1: Does somebody know whether this search engine no longer exists, or it was just moved to another domain?

Question 2: Is the source code available somewhere - so that somebody could try to run the same engine on their own hosting (at least in theory)?

Question 3: Does somebody have enough experience with uniquation (back when it used to work) to be able to say how its results compare to some other similar search engines? (For example, the ones mentioned here: How to search the internet for strings that consist mostly of math notation?)


[1] There are also some comments on main and on meta which mention this site. Searching for posts with this link across the whole network, I also found this post on MathOverflow Meta: Is there any third party search engine for MathOverflow? And this post on TeX Stack Exchange: What useful web services are out there?

[2] You can see how this website looked in the Wayback Machine. In fact, it seems that snapshots of the search results for some searches were saved too: after I clicked on the first link in that snapshot, I got search results for $\sin(\alpha+\beta)$.

Curious Binomial Coefficient Property

Posted: 24 Apr 2021 09:02 PM PDT

Let $p = 3m + 1 $ be prime.

Let $x$ be the integer closest to $0$ (not necessarily positive) such that: $$ x \equiv {2m \choose m} \pmod p$$

Then, is it true that: $$ 9 \mid p+1-x$$

For example, when $m=2$, $p=7$, then $x=-1$ and $9\mid 7+1-(-1)$

I've programmatically checked it for $p<35000$, so I'm pretty sure it's true. I don't see any obvious way to simplify the binomial coefficient, nor any obvious properties that it has. I'd appreciate any help in understanding why this is the case and whether there's some kind of simple underlying relationship here that I'm missing.
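For anyone who wants to reproduce the numerics, here is a sketch of the check (plain Python; the prime test is naive but sufficient at this scale):

```
from math import comb

def is_prime(n):
    if n < 2:
        return False
    d = 2
    while d * d <= n:
        if n % d == 0:
            return False
        d += 1
    return True

for m in range(1, 500):
    p = 3 * m + 1
    if not is_prime(p):
        continue
    c = comb(2 * m, m) % p
    x = c if c <= (p - 1) // 2 else c - p     # representative closest to 0
    assert (p + 1 - x) % 9 == 0, (p, x)

print("9 | p+1-x for all primes p = 3m+1 with m < 500")
```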

Sum of reciprocal sine function $\sum\limits_{k=1}^{n-1} \frac{1}{\sin(\frac{k\pi}{n})}=?$

Posted: 24 Apr 2021 09:03 PM PDT

The question came to me when I found that there are answers on the summation of some forms of trigonometric functions, e.g. $$ \sum\limits_{k=1}^{n-1} \frac{1}{\sin^2(\frac{k\pi}{n})}\\ \sum\limits_{k=0}^{n-1} \tan(\frac{k\pi}{n})\\ $$ Sum of the reciprocal of sine squared

Sum of tangent functions where arguments are in specific arithmetic series

To show the identity for $\sum\limits_{k=1}^{n-1} \frac{1}{\tan^2(\frac{k\pi}{n})}$ should be trivial, as the summand can be rewritten as $\frac{1}{\sin^2(\frac{k\pi}{n})}-1$.

I am wondering about the value of the following summation: $$ \sum\limits_{k=1}^{n-1} \frac{1}{\sin(\frac{k\pi}{n})}? $$
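I don't know a closed form for this sum, but it is easy to explore numerically (a Python sketch); the values appear to grow like $\frac{2n}{\pi}\log n$ plus lower-order terms, consistent with approximating $\frac{1}{\sin(k\pi/n)}$ by $\frac{n}{\pi k}$ near both ends of the range:

```
import numpy as np

def S(n):
    k = np.arange(1, n)
    return (1.0 / np.sin(k * np.pi / n)).sum()

for n in [10, 100, 1_000, 10_000, 100_000]:
    print(n, S(n), S(n) / ((2 * n / np.pi) * np.log(n)))   # ratio drifts toward 1
```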

Applying a linear operator to a Gaussian Process results in a Gaussian Process: Proof

Posted: 24 Apr 2021 09:15 PM PDT

In this paper, it is stated without proof or citation that "Differentiation is a linear operation, so the derivative of a Gaussian process remains a Gaussian process". Intuitively, this seems reasonable, as a linear combination of Gaussian random variables is also Gaussian, and this is just an extension to the case where, instead of a vector-valued random variable, we have a random variable defined on a function space. But I cannot find a source with a proof, and the details of a proof elude me.

Proof outline: Let $x(t)\sim \mathcal{GP}(m(t),k(t, t^\prime))$ be a Gaussian process with mean function $m(t)$ and covariance function $k(t, t^\prime)$, and let $\mathcal{L}$ be a linear operator. For any vector $T=(t_1,...,t_n)$, let $x_T=(x(t_1),...,x(t_n))$. Then $x_T\sim \mathcal{N}(m_T,k_{T,T})$. Now consider the stochastic process $u(t)=\mathcal{L}x(t)$. It suffices to show that the finite-dimensional distributions of $u(t)$ are Gaussian, but translating the action of the linear operator on $x(t)$ to the finite-dimensional case is giving me trouble.

In the case of differentiation, we have $u(t)=\mathcal{L}x(t)=\frac{dx}{dt}=\lim_ {h\rightarrow 0}\frac{x(t+h)-x(t)}{h}$. For all $h>0$, the random variable $v(t)=\frac{x(t+h)-x(t)}{h}$ is normal, and by interchanging integration and the limit, we have

$$ \begin{array}{rcl} m_u(t)&=&E\left(\lim\limits_{h\to 0}\frac{x(t+h)-x(t)}{h}\right)\\ &=&\lim\limits_ {h\to 0}E\left( \frac{x(t+h)-x(t)}{h}\right)\\ &=&\lim\limits_ {h\to 0}\frac{m(t+h)-m(t)}{h}\\ &=&m^\prime(t) \end{array}$$

Of course, we need to verify when this interchange is appropriate. Similarly, we can intuit that the covariance function of $u(t)$ has the form

$$ k_u(t,t^\prime)=\frac{\partial^2}{\partial t\,\partial t^\prime }k(t,t^\prime) $$

but I am having a hard time making the leap from finite approximations to the infinite-dimensional case.
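A numerical sanity check of the two formulas is possible even before the interchange-of-limits argument is nailed down. The sketch below (a hypothetical example: an RBF kernel $k(s,t)=e^{-(s-t)^2/2}$, for which $\frac{\partial^2}{\partial s\,\partial t}k=(1-(s-t)^2)\,k$) samples the process jointly at $t$ and $t+h$ and checks that the difference quotients have approximately that covariance:

```
import numpy as np

rng = np.random.default_rng(7)
h = 1e-2
t = np.array([0.0, 0.7])
pts = np.concatenate([t, t + h])                       # evaluate x at t and t+h jointly
K = np.exp(-0.5 * (pts[:, None] - pts[None, :]) ** 2)  # RBF covariance matrix

x = rng.multivariate_normal(np.zeros(4), K, size=200_000)
v = (x[:, 2:] - x[:, :2]) / h                          # difference quotients at t[0], t[1]

emp = np.cov(v.T)
d = t[0] - t[1]
exact = (1 - d**2) * np.exp(-0.5 * d**2)               # d²k/(ds dt) at (t[0], t[1])
print("empirical:", emp[0, 1], " exact:", exact)       # close, up to O(h) and MC error
```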

Reference request: If there is any textbook or paper that does more than mention this fact in passing, please let me know.

Analyzing a mixture problem.

Posted: 24 Apr 2021 08:46 PM PDT

I am having a problem with this question:

Coffee A costs $75$ cents per pound and coffee B costs $80$ cents per pound; they are mixed to form a blend that costs $78$ cents per pound. If there are $10$ pounds of the mixture, how many pounds of coffee A are used?

According to the text the answer is $4$. I don't know how they came up with this answer, since they haven't given a clue about the percentage of each coffee used in the mixture. Maybe I am missing something here. Any suggestions?

Here is what I could think of:

$75 + 80 = 155$ cents, so $75$ is $\left(\frac{7500}{155}\right)\%$ of $155$ cents.

Now taking $\left(\frac{7500}{155}\right)\%$ of $78$ cents (the price of the mixture) we get $\frac{1170}{31}$ cents.

Now, if $75$ cents is $1$ pound, then $\frac{1170}{31}$ cents would be $\frac{78}{155}$ pound.

So for ten pounds it would be $\frac{780}{155}$, which is still not the answer.
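For reference, the intended setup is a weighted average rather than the percentage calculation above (a sketch): let $a$ be the pounds of coffee A, so that $10-a$ pounds are coffee B, and equate total costs in cents:

$$75a + 80(10-a) = 78\cdot 10 \;\Longrightarrow\; 800 - 5a = 780 \;\Longrightarrow\; a = 4.$$

The missing "clue" is exactly the $78$-cent price of the blend: it pins down the proportions.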
