Saturday, June 5, 2021

Recent Questions - Mathematics Stack Exchange


Applications of intuitionistic logic

Posted: 05 Jun 2021 07:58 PM PDT

Can anybody enlighten me about the applications of intuitionistic logic? I am familiar with this system only from G. Takeuti's book, where it is described as one of the examples of axiomatic systems of logic. Is it possible to explain to a non-specialist why this system deserves study?

I suspect that this is easier to explain with the example of the propositional intuitionistic calculus, so if somebody could shed light on this, it would be especially valuable.

Jacobian for this function (numerically, in MATLAB)?

Posted: 05 Jun 2021 07:51 PM PDT

I have two vectors $r$ and $m$. Both vectors are $N\times1$.

A function is calculated as -

$F(1:N) = \phi r + (r^3 + rm^2)$

$F(N+1:2N) = \phi m + (m^3+mr^2)$

I am having trouble calculating the Jacobian for this function. Here's what I did:

J11 = phi*eye(N) + (3*r.^2 + m.^2);
J12 = 2*r.*m;
J21 = 2*m.*r;
J22 = phi*eye(N) + (3*m.^2 + r.^2);
J = [J11 J12; J21 J22];

where J is the Jacobian. I believe the problem is in the part of the function where $r$ and $m$ are multiplied with each other, but I am not 100% sure about this and even if I was, I am not sure how I could fix this.

The code above is written in MATLAB. Any help would be really appreciated.
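
For what it's worth, a sketch of one likely fix, written in NumPy rather than MATLAB for concreteness: since $F$ acts elementwise on $r$ and $m$, every block of the Jacobian is a diagonal matrix, so the elementwise derivative vectors (including the cross terms $2rm$) should pass through diag before being assembled; in the code above, J12 and J21 are left as vectors and J11, J22 add vectors to full matrices.

import numpy as np

def jacobian(r, m, phi):
    # dF_i/dr_j and dF_i/dm_j vanish for i != j, so all four blocks are diagonal
    N = len(r)
    J11 = phi * np.eye(N) + np.diag(3 * r**2 + m**2)
    J12 = np.diag(2 * r * m)
    J21 = np.diag(2 * m * r)
    J22 = phi * np.eye(N) + np.diag(3 * m**2 + r**2)
    return np.block([[J11, J12], [J21, J22]])

r, m = np.array([1.0, 2.0]), np.array([0.5, -1.0])
print(jacobian(r, m, 0.1))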

Can $1/\overline{X}_n$ become integrable only for larger $n$?

Posted: 05 Jun 2021 07:50 PM PDT

Suppose $X_1,\ldots,X_n$ are iid with a finite mean not equal to $0$ and let $\overline{X}_n$ be the sample mean. Though $E(1/\overline{X}_1)=E(1/X_1)$ may be infinite, could it be that $E(1/\overline{X}_n)$ becomes finite for some $n$? Can conditions be given to ensure this? This boils down to concentration of a mean (assuming just 1 moment) away from $0$, about which I know very little.

This question is a follow-up to this question, which asks whether the reciprocals of sample moments are uniformly integrable. I want to focus on the first sample moment: is $1/\overline{X}_n$ UI? A necessary condition would be that, for large enough $n$, $E(1/|\overline{X}_n|)$ is finite. If $E(X_1)=0$, that would not be possible. If the density of $\overline{X}_n$ (assume it has a pdf) is nonzero at $0$, then the reciprocal wouldn't be integrable. If the density is bounded away from $0$, the problem looks too easy. But suppose the density of $\overline{X}_n$ at $0$ is nonzero for small $n$ and becomes $0$ for larger $n$? For example, when the common distribution of the $X_i$ is beta(1/2,1/2), the density is infinite at $0$ for $n=1$, but numerical simulations show the density of $\overline{X}_n$ at $0$ sure looks to be $0$ once $n$ is greater than 3 or 4.
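
For what it's worth, a minimal simulation sketch (in Python with NumPy; the sample size and the bin width 0.01 are arbitrary choices) of the beta(1/2,1/2) example, estimating the density of $\overline{X}_n$ just to the right of $0$:

import numpy as np

rng = np.random.default_rng(0)
eps = 0.01  # bin width near 0 (arbitrary)
for n in (1, 2, 4, 8):
    means = rng.beta(0.5, 0.5, size=(200_000, n)).mean(axis=1)
    # P(mean < eps) / eps approximates the density just to the right of 0
    print(n, np.mean(means < eps) / eps)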

A regular curve is closed if and only if it has a regular periodic extension

Posted: 05 Jun 2021 07:49 PM PDT

I have to prove the following statement:

A regular curve $\gamma:[a,b]\to\mathbb{R}^n$ is a closed curve if and only if there exists a periodic regular curve $\tilde\gamma:\mathbb{R}\to\mathbb{R}^n$ with period $b-a$ such that $\tilde\gamma(t)=\gamma(t)$ for all $t\in[a,b]$

I did the following proof, and I would be thankful if someone could verify it, tell me how to fix it if it is wrong, give me comments and tips, or share your own proofs if you want to.

Proof.

$(\implies)$ Suppose $\gamma$ is a closed curve. For each $x\in\mathbb{R}$ we define

$$ E_x=\{k\in\mathbb{Z}:x\leq b+k(b-a)\}, K_x=\text{min}(E_x) $$

It is clear that $K_x=\left[\dfrac{x-b}{b-a}\right]+1$, where $[\cdot]$ denotes the floor function. Furthermore, it is easy to see that:

$$ x\leq b + K_x(b-a) \implies x-K_x(b-a)\leq b $$

and

$$ x\geq b + (K_x-1)(b-a)=a+K_x(b-a) \implies x-K_x(b-a)\geq a $$

therefore $x-K_x(b-a)\in[a,b]$ for all $x\in\mathbb{R}$. Now we define $\tilde\gamma:\mathbb{R}\to\mathbb{R}^n$ by $\tilde\gamma(x)=\gamma(x-K_x(b-a))$. Let's see that $\tilde\gamma$ has period $b-a$. Let $w\in\mathbb{Z}$. First notice that:

$$ k\in E_x \iff x\leq b+k(b-a) \iff x+w(b-a)\leq b+(k+w)(b-a) \iff k+w\in E_{x+w(b-a)} $$

from which we get $E_{x+w(b-a)}=\{k+w : k\in E_x\}$, and by definition of $K_x$ we have $K_x=K_{x+w(b-a)}-w$. This way we have:

$$ \tilde\gamma(x+w(b-a))=\gamma(x+w(b-a)-K_{x+w(b-a)}(b-a))=\gamma(x-(K_{x+w(b-a)}-w)(b-a)) =\gamma(x-K_x(b-a))=\tilde\gamma(x) $$

Therefore, $\tilde\gamma$ has period $b-a$. Let's now see that $\tilde\gamma$ is differentiable on all of $\mathbb{R}$. Let $c\in\mathbb{R}$. If $c$ is not of the form $b+w(b-a)$ for some $w\in\mathbb{Z}$, then $\dfrac{c-b}{b-a}\notin\mathbb{Z}$, so the function $g(x)=\left[\dfrac{x-b}{b-a}\right]+1=K_x$ has a derivative at $c$ with $g'(c)=0$. This implies that $x\mapsto x-K_x(b-a)$ is differentiable at $c$ with derivative $1$, from which $\tilde\gamma$ is differentiable at $c$ and, by the chain rule:

$$ \tilde\gamma'(c)=\gamma '(c-K_c(b-a))\neq 0 $$

Now, if $c=b+w(b-a)$ for some $w\in\mathbb{Z}$, notice the following:

$$ \lim_{x\to c^-}\dfrac{\tilde\gamma(x)-\tilde\gamma(c)}{x-c}=\lim_{x\to c^-}\dfrac{\tilde\gamma(x)-\tilde\gamma(b+w(b-a))}{x-b-w(b-a)}=\lim_{x\to c^-}\dfrac{\tilde\gamma(x)-\tilde\gamma(b)}{x-b-w(b-a)} $$

Considering the change of variables $x=y+w(b-a)$ and considering that:

$$ \lim_{y\to b^-}y+w(b-a)=b^-+w(b-a)=c^- $$

we have, restricting the second limit to a "left-neighbourhood" of $b$ contained in $[a,b]$:

$$ \lim_{x\to c^-}\dfrac{\tilde\gamma(x)-\tilde\gamma(c)}{x-c}=\lim_{y\to b^-}\dfrac{\tilde\gamma(y+w(b-a))-\tilde\gamma(b)}{y+w(b-a)-b-w(b-a)}=\lim_{y\to b^-}\dfrac{\gamma(y+w(b-a)-w(b-a))-\gamma(b)}{y-b}=\lim_{y\to b^-}\dfrac{\gamma(y)-\gamma(b)}{y-b}=\gamma '(b) $$

Similarly, we have that:

$$ \lim_{x\to c^+}\dfrac{\tilde\gamma(x)-\tilde\gamma(c)}{x-c}=\gamma '(b) $$

and this way $\lim_{x\to c}\dfrac{\tilde\gamma(x)-\tilde\gamma(c)}{x-c}=\gamma '(b)$ from where $\tilde\gamma'(c)=\gamma'(b)\neq 0$.

In conclusion, $\tilde\gamma$ is differentiable on all of $\mathbb{R}$ and its derivative never vanishes, which implies that $\tilde\gamma$ is a regular curve. It is also easy to see that $\tilde\gamma|_{[a,b]}=\gamma$. Thus $\tilde\gamma$ is the desired extension.

$(\impliedby)$ Suppose there exists a periodic regular curve $\tilde\gamma:\mathbb{R}\to\mathbb{R}^n$ with period $b-a$ such that $\tilde\gamma(t)=\gamma(t)$ for all $t\in[a,b]$. It is known that all the derivatives of $\tilde\gamma$ must also have period $b-a$. This way, if $n\in\mathbb{N}$ then

$$ \gamma^{(n)}(a)=\tilde\gamma^{(n)}(a)=\tilde\gamma^{(n)}(a+b-a)=\tilde\gamma^{(n)}(b)=\gamma^{(n)}(b) $$

which implies that $\gamma$ is a closed curve.

Finite sum for first $n$ terms of this sequence

Posted: 05 Jun 2021 07:56 PM PDT

Is there a closed form for the sum of the first $n$ terms of this sequence? $\sum_{k=1}^{n}\frac{1}{k^2+1}$
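
For what it's worth, there is a closed form in terms of the digamma function: the partial fractions $\frac{1}{k^2+1}=\frac{1}{2i}\left(\frac{1}{k-i}-\frac{1}{k+i}\right)$ telescope against $\sum_{k=1}^{n}\frac{1}{k+a}=\psi(n+1+a)-\psi(1+a)$, giving $$\sum_{k=1}^{n}\frac{1}{k^2+1}=\frac{1}{2i}\Bigl[\psi(n+1-i)-\psi(1-i)-\psi(n+1+i)+\psi(1+i)\Bigr].$$ A quick numerical check of this identity (in Python, assuming the mpmath package):

from mpmath import mpc, digamma

def partial_sum(n):
    return sum(1 / (k**2 + 1) for k in range(1, n + 1))

def closed_form(n):
    i = mpc(0, 1)
    val = (digamma(n + 1 - i) - digamma(1 - i)
           - digamma(n + 1 + i) + digamma(1 + i)) / (2 * i)
    return val.real  # the imaginary parts cancel

for n in (5, 50):
    print(n, partial_sum(n), closed_form(n))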

synthetic divide polynomial $(2x^3+7x^2-13x-3) \div (2x-3)$

Posted: 05 Jun 2021 07:56 PM PDT

I need to figure out how synthetic division works. The problem is $$ (2x^3+7x^2-13x-3) \div (2x-3)$$ I can do the long division. $$\require{enclose} \begin{array}{r} x^2+5x+1 \\ (2x-3) \enclose{longdiv}{2x^3+7x^2-13x-3}\\ \underline{-2x^3+3x^2} \phantom{-13x-3000} \\ 10x^2-13x \phantom{-3000} \\ \underline{-10x^2+15x} \phantom{-3000} \\ 2x-3 \\ \underline{-2x+3} \\ 0 \end{array}$$
But when I do synthetic division my answer is slightly different;

$$ \begin{array}{c|rrrr} \tfrac{3}{2} & 2 & 7 & -13 & -3 \\ & & 3 & 15 & 3 \\ \hline & 2 & 10 & 2 & 0 \end{array} \implies 2x^2 +10x +2$$

I just realized that I could factor the two out of the quotient after I synthetically divide. The answer then would be $$2(x^2+5x+1)$$ The only problem I have is that the two graphs would be different. The second graph would be stretched along the y-axis. If I put $1$ in for $x$, then $y=14$ after synthetic division, whereas $y$ would equal $7$ if I put $1$ in for $x$ after long division. I am not sure what to do next.
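
For what it's worth, a minimal sketch in Python of the two-step view (the point being that synthetic division with the root $3/2$ divides by the monic factor $x-\tfrac{3}{2}$, so the result must afterwards be divided by the leading coefficient $2$ to account for the divisor being $2x-3=2\left(x-\tfrac{3}{2}\right)$; this is exactly why the two answers differ by a factor of $2$):

from fractions import Fraction

def synthetic_divide(coeffs, root):
    # Divide a polynomial (coefficients listed highest degree first) by (x - root).
    out = [coeffs[0]]
    for c in coeffs[1:]:
        out.append(c + root * out[-1])
    return out[:-1], out[-1]  # quotient coefficients, remainder

quotient, remainder = synthetic_divide([2, 7, -13, -3], Fraction(3, 2))
print(quotient, remainder)        # [2, 10, 2] and 0, i.e. 2x^2 + 10x + 2
print([q / 2 for q in quotient])  # [1, 5, 1], i.e. x^2 + 5x + 1 after dividing by 2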

Could anyone help me solve this problem?

Posted: 05 Jun 2021 07:29 PM PDT

Honestly, I've never been good with logarithms; they mess with my head.

In the equation $$\log_{10}(ax)\log_{10}(bx)+1=0$$ with $a>0$, $b>0$ and $x>0$,

the problem asks for the possible values of $a/b$.

I don't know how to find the answer to this. I tried to expand the expression and to move the $1$ to the right-hand side as $-1$ and solve from there, but it was useless; the farthest I've gotten is $$\log_{10}(a)\log_{10}(b)+\log_{10}(a)\log_{10}(x)+\log_{10}(b)\log_{10}(x)+(\log_{10}x)^2+1=0$$

Sorry I couldn't write the problem out properly.

Thanks.
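
For what it's worth, a standard next step (not in the original post) is to substitute $t=\log_{10}x$, turning the expanded equation into a quadratic in $t$: $$t^2+(\log_{10}a+\log_{10}b)\,t+\log_{10}a\log_{10}b+1=0,$$ which has a real solution exactly when its discriminant is nonnegative: $$(\log_{10}a+\log_{10}b)^2-4(\log_{10}a\log_{10}b+1)\geq 0 \iff \left(\log_{10}\frac{a}{b}\right)^2\geq 4.$$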

Numerical integration by matlab.

Posted: 05 Jun 2021 07:25 PM PDT

I want MATLAB code for my report about the paper proposing a numerical integration method, to help me compute certain approximate values of integrals. I am weak in MATLAB, but I need code for my report.

https://www.researchgate.net/profile/Louis-Asiedu-3/publication/303800212_A_Proposed_Method_for_Numerical_Integration/links/576284fa08aedc8ccc817de6/A-Proposed-Method-for-Numerical-Integration.pdf

This is a link to the paper; the method is equation (12) on page 6.

How can I express this function in terms of $u$ and $v$?

Posted: 05 Jun 2021 07:28 PM PDT

Consider the following equations \begin{equation} u = x \cos(y)\ \ \text{and}\ \ v=x \sin(y) \end{equation}

Show that near to $(x_0,y_0)$, with $x_0\neq 0$, $(x,y)$ can be written as a differentiable function of $(u,v)$.

It seems simple, but I have no idea where to start, so I haven't done anything.
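
For what it's worth, a natural starting point (not given in the post) is the inverse function theorem: the Jacobian determinant of $(x,y)\mapsto(u,v)$ is $$\frac{\partial(u,v)}{\partial(x,y)}=\det\begin{pmatrix}\cos y & -x\sin y\\ \sin y & x\cos y\end{pmatrix}=x\cos^2 y+x\sin^2 y=x,$$ which is nonzero at $(x_0,y_0)$ because $x_0\neq 0$, so near $(u_0,v_0)=(x_0\cos y_0, x_0\sin y_0)$ the map has a differentiable local inverse $(u,v)\mapsto(x,y)$.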

Find the marginal functions of $f(x,y)=\min\left \{x,y \right \}\mathbf{1}_{(0,1)^{2}}$

Posted: 05 Jun 2021 07:42 PM PDT

$f(x,y)=\min\left \{x,y \right \}\mathbf{1}_{(0,1)^{2}}$

Help: how do I integrate with respect to each variable?

$f_{X}(x)=\int_{0}^{1}\min\left \{x,y \right \}\, dy$

$f_{X}(x)=\int_{0}^{1}y\, dy+\int_{0}^{1}x\, dy= \frac12+x$

Would this be correct? I just want to know whether my limits are correct.
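
For comparison, here is the split with the limits at $y=x$ (the key point being that $\min\{x,y\}=y$ on $\{y<x\}$ and $\min\{x,y\}=x$ on $\{y\geq x\}$): $$f_X(x)=\int_0^x y\,dy+\int_x^1 x\,dy=\frac{x^2}{2}+x(1-x)=x-\frac{x^2}{2},\qquad 0<x<1.$$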

Find the intersection points of a cubic spline with line parallel to y axis

Posted: 05 Jun 2021 07:30 PM PDT

I have a cubic spline function, defined by the vectors $a$, $b$, $c$ and $d$ in $\mathbb{R}^2$, where $t$ is time:

$$f(t) = a(1-t)^3 + 3bt(1-t)^2 + 3ct^2(1-t) + dt^3$$

Given the four vectors $a, b, c, d$, we can draw a graph for $0 < t < 1$. Given an $x_1$ coordinate on that graph, how could I get the $y$ coordinates of all points that have that $x$ coordinate?

Would the solution be to find the intersection between the polynomial function and the line $x = x_1$? If so, how would one approach solving for this intersection algebraically?
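
For what it's worth, a numerical sketch in Python with NumPy of exactly that approach (the control points at the bottom are made-up examples): expand the $x$ component into the power basis in $t$, solve the cubic $x(t)=x_1$, keep the real roots in $[0,1]$, and evaluate $y$ at them.

import numpy as np

def bezier_point(t, a, b, c, d):
    return (a * (1 - t)**3 + 3 * b * t * (1 - t)**2
            + 3 * c * t**2 * (1 - t) + d * t**3)

def ys_at_x(x1, a, b, c, d):
    ax, bx, cx, dx = a[0], b[0], c[0], d[0]
    # Power-basis coefficients of x(t), highest degree first:
    # x(t) = (-ax+3bx-3cx+dx) t^3 + (3ax-6bx+3cx) t^2 + (-3ax+3bx) t + ax
    coeffs = [-ax + 3*bx - 3*cx + dx, 3*ax - 6*bx + 3*cx, -3*ax + 3*bx, ax - x1]
    roots = np.roots(coeffs)
    ts = [r.real for r in roots if abs(r.imag) < 1e-9 and 0 <= r.real <= 1]
    return [bezier_point(t, a, b, c, d)[1] for t in ts]

a, b, c, d = map(np.array, ([0, 0], [1, 2], [2, -1], [3, 1]))
print(ys_at_x(1.5, a, b, c, d))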

Drawing Conclusions From Only Medians

Posted: 05 Jun 2021 07:18 PM PDT

At a small high school, the 12th grade has only 20 students. They all take the same five classes (English, Math, Science, History, and Latin) together. At the end of the year each student is assigned a numerical grade between 0 and 100 for each class. To calculate their GPA for the year, a student adds their five grades and divides by five.

The median grade for each class is known, but nothing else. We cannot even assume that the grade distributions in each class are normal. Can one draw any conclusions about the median or mean GPA?

If concrete numbers help, pretend that the median grades are: English: 83, Math: 84, Science: 85, History: 86, Latin: 87.

Rudin Theorem: quotient of functions

Posted: 05 Jun 2021 07:39 PM PDT

The setting for Theorem 4.4(c) in Baby Rudin is:

Suppose $E \subset X$, a metric space, $p$ is a limit point of $E$, $f$ and $g$ are complex functions on $E$, and $\lim\limits_{x \to p} f(x) = A$, $\lim\limits_{x \to p} g(x) = B$; then $\lim\limits_{x \to p} \left(\frac{f}{g}\right)(x) = \frac{A}{B}$, if $B \neq 0$.

Parts (a) and (b) were addition and multiplication, and I think I understand those, but I can't fully and rigorously justify (c). I need to use Theorem 4.2, which relates limits of sequences to limits of functions, together with the analogous properties for sequences. So I should be able to reduce this to a product of sequences with $s_n = f(x_n)$ and $t_n = \frac{1}{g(x_n)}$, but I need this to hold for all sequences $x_n \neq p$ with $x_n \to p$, which I cannot do if $g(x_n) = 0$ anywhere.

Does anyone have tips for how to rigorously justify this?
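
For what it's worth, one standard way around this (an added hint, not part of the original post): since $B\neq 0$, applying the definition of $\lim_{x \to p} g(x)=B$ with $\varepsilon=|B|/2$ gives a $\delta>0$ such that $$|g(x)|\geq |B|-|g(x)-B| > \frac{|B|}{2}>0 \quad\text{whenever } x\in E,\ 0<d(x,p)<\delta.$$ So along any sequence $x_n\to p$ with $x_n\neq p$, we have $g(x_n)\neq 0$ for all sufficiently large $n$, and discarding the finitely many initial terms changes no limits.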

P-Adic Expansion of Rational Number (Reverse Direction)

Posted: 05 Jun 2021 08:01 PM PDT

So I checked Method of finding a p-adic expansion to a rational number

and also P-adic expansion of rational number

Many questions here are of the form: given a rational number, find its $p$-adic expansion. The solution is long division or some sort of geometric series argument.

What if I want to go backwards? I.e., given the $p$-adic expansion, what is the rational number associated to it? (Of course, not every $p$-adic expansion is rational; from reading texts, it seems the $p$-adic expansions of rational numbers are eventually periodic, so let's assume the ones we work with are.)

To make this concrete, let's say I have

$5^{-1} + 1 + 2\cdot5 + 5^2 + (4\cdot5^3 + 2\cdot5^4 + 3\cdot5^5 + 2\cdot5^7 + 5^8) + (4\cdot5^9 + 2\cdot5^{10} + 3\cdot5^{11} + 2\cdot5^{13} + 5^{14}) + (4\cdot5^{15} + 2\cdot5^{16} + 3\cdot5^{17} + 2\cdot5^{19} + 5^{20})+\cdots$

where I put the repeating parts in parentheses. Since the first term is $5^{-1}$, let's say it is $.1121(423021\text{ repeating})$. How would I find the rational number associated to it?
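
For what it's worth, a sketch in Python of the reverse direction: in $\mathbb{Q}_p$ the repeating block is a geometric series, $\sum_{j\geq 0} B\,p^{s+jT} = \dfrac{B\,p^{s}}{1-p^{T}}$ (convergent $p$-adically since $|p^T|_p<1$), so the whole sum is an exact rational that Fraction can carry.

from fractions import Fraction

def from_repeating_padic(p, pre, pre_start, block, block_start):
    # pre: digits of the non-repeating part, starting at exponent pre_start;
    # block: digits of one repeating block, starting at exponent block_start.
    head = sum(Fraction(d) * Fraction(p)**(pre_start + i) for i, d in enumerate(pre))
    B = sum(d * p**i for i, d in enumerate(block))  # one block read as an integer
    T = len(block)
    tail = Fraction(B * p**block_start, 1 - p**T)   # geometric series, summed in Q_p
    return head + tail

# The example above: digits .1121(423021 repeating) in base 5, i.e. the digits
# 1,1,2,1 at exponents -1..2 and the block 4,2,3,0,2,1 starting at exponent 3.
print(from_repeating_padic(5, [1, 1, 2, 1], -1, [4, 2, 3, 0, 2, 1], 3))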

Good book on asymptotics and especially equivalence

Posted: 05 Jun 2021 07:23 PM PDT

I'm trying to deepen my knowledge of asymptotic analysis, and I find very few resources; most of them just state the definitions and theorems.

I'm especially looking to understand this definition more clearly: $$ f(x)=g(x)(1+o(1))$$ sometimes written as $$u_n=(1+\epsilon(n))v_n$$ with $\epsilon(n)\to 0$; both say that the ratio of the two sides tends to $1$, i.e. $f\sim g$ and $u_n\sim v_n$.

A holomorphic function on an open connected domain, with $f(z_0)=0$

Posted: 05 Jun 2021 07:56 PM PDT

Let $f$ be a holomorphic function on an open connected domain $\Omega \subseteq \mathbb{C}$. Suppose $f$ is not the zero function. If $z_0$ is a zero of $f$, there exists a neighborhood $U \subseteq \Omega$ of $z_0$ such that $f(z)\neq0$ for all $z \in U \setminus \{z_0 \}$.

In other posts this is done using the series expansion, but I was told to use the principle of analytic continuation, and I have no clue how to use it here.

Fixed Point Iterations on $f : \mathbb{R}^2 \rightarrow \mathbb{R}^2$

Posted: 05 Jun 2021 07:36 PM PDT

Say I have two nonlinear equations of the form $$ \begin{bmatrix} u \\ v \end{bmatrix} = f(u,v) = \begin{bmatrix} f_1(u,v) \\ f_2(u,v) \end{bmatrix}, \tag*{(1)} $$ where $u,v \in \mathbb{R}$ and I want to find fixed points $u^{\star}$ and $v^{\star}$ (assuming they exist). To compute them numerically, we can write the equations above as $$ \begin{bmatrix} u_{k+1} \\ v_{k+1} \end{bmatrix} = \begin{bmatrix} f_1(u_k,v_k) \\ f_2(u_k,v_k) \end{bmatrix}, \tag*{(2)} $$ for $k = 1,2,\dots$, and iterate until the mismatches between successive $u_k$ and $v_k$ are sufficiently small. This method results in the following pseudocode:

k = 1; converged = false;
while converged == false
    u[k+1] = f_1(u[k], v[k]);
    v[k+1] = f_2(u[k], v[k]);
    % use absolute mismatches so negative differences don't pass the test
    if max(abs(u[k+1] - u[k]), abs(v[k+1] - v[k])) < tolerance
        converged = true;
    end
    k += 1;
end

However, I can empirically get a faster convergence if in the calculation of $v_{k+1}$, I use the new $u_{k+1}$ instead of $u_k$. Then $(2)$ becomes $$ \begin{bmatrix} u_{k+1} \\ v_{k+1} \end{bmatrix} = \begin{bmatrix} f_1(u_k,v_k) \\ {f_2(u_{\color{red}{ k + 1 }},v_k)} \end{bmatrix}. \tag*{(3)} $$
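
Regarding Question 1: updates of this staggered kind are commonly called (nonlinear) Gauss-Seidel-type iterations, as opposed to the Jacobi-type simultaneous update in $(2)$. A minimal sketch of $(3)$ in Python (the functions and tolerance below are placeholders):

def iterate(f1, f2, u, v, tol=1e-10, max_iter=1000):
    for _ in range(max_iter):
        u_new = f1(u, v)
        v_new = f2(u_new, v)  # uses the fresh u_new, not u, as in (3)
        if max(abs(u_new - u), abs(v_new - v)) < tol:
            return u_new, v_new
        u, v = u_new, v_new
    raise RuntimeError("did not converge")

# Example with a contraction whose fixed point is (2/15, 1/15):
print(iterate(lambda u, v: 0.5 * v + 0.1, lambda u, v: 0.5 * u, 0.0, 0.0))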

Questions

  1. Is $(3)$ a well-defined/standard expression for what I do numerically, based on the original $(1)$? If not, is there a standard way to express this sort of "joint" iteration of two variables?
  2. More importantly, given that $f_1$ and $f_2$ are differentiable on their respective domains, if I use the Jacobian to evaluate whether $f_1$ and $f_2$ are contractions using $(1)$, does the result apply to the numerical implementation of $(3)$?

I hope MSE is the appropriate place to post this question; if not, please let me know or migrate it. Thanks!

When is $k+\langle f \rangle \subseteq k[x]$ a UFD?

Posted: 05 Jun 2021 08:00 PM PDT

Let $k$ be a field of characteristic zero and $f \in k[x]-\{0\}$.

Let $R_f=k+\langle f \rangle$.

Question 1: Is it possible to find a general form of $f \in k[x]-\{0\}$ such that $R_f$ is a UFD?

For example, $R_{x^2}=k+\langle x^2 \rangle=k[x^2,x^3]$ is not a UFD: $x \notin R_f$ and $x^6=x^2x^2x^2=x^3x^3$.

Is it true that $R_f$ is a UFD when $f$ is separable (has distinct roots)? Probably not? What about $g=x+x^2$? $h=x^2-1$?

Question 2: The same question for the two-dimensional case: when is $S_{f,y}=k+\langle f,y \rangle \subseteq k[x,y]$ a UFD?

'Again', $S_{x^2,y}=k+ \langle x^2,y \rangle= k[x^2,x^3,y]$ is not a UFD.

Thank you very much!

Converging zeroes of an $M$-smooth function

Posted: 05 Jun 2021 07:27 PM PDT

A function $f: \mathbb{R}^p \rightarrow \Bbb R$ is $M$-smooth, i.e. $$|f(x+h)-f(x) -\langle \boldsymbol\nabla f(x), h \rangle| < M\|h\|^2. $$ Suppose we have a sequence of zeroes of $f$, given by $x_n$, converging to $x^*$. Would $f$ be zero in a neighbourhood of $x^*$?

Closed-form optimal solution for box-constrained optimization problem

Posted: 05 Jun 2021 07:23 PM PDT

Consider the following problem: \begin{equation} \begin{aligned} \max_{x} & \frac{K - \sum\limits_{i=1}^N w_i x_i}{\sum\limits_{i=1}^N \frac{1}{x_i}}\\ & s.t. 0 < x_i \leq U_i, \forall i =1, 2, \cdots, N \end{aligned} \end{equation} where $K, w_i > 0$.

Can we solve this problem and get the closed form of optimal solution? Or any structure like the similar problem for the optimal solution?


I just noticed that I made a mistake in the denominator; it should be $\sum\limits_{i=1}^N\frac{1}{x_i}$. Sorry about that.
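
Not a closed form, but for what it's worth, a numerical sketch (in Python with SciPy; eps is an arbitrary cushion keeping $x_i$ off the open boundary at $0$) for probing the structure of maximizers on small instances:

import numpy as np
from scipy.optimize import minimize

def solve(K, w, U, eps=1e-9):
    def neg_obj(x):  # minimize the negative to maximize the objective
        return -(K - w @ x) / np.sum(1.0 / x)
    x0 = U / 2.0
    res = minimize(neg_obj, x0, bounds=[(eps, u) for u in U])
    return res.x, -res.fun

w = np.array([1.0, 2.0, 0.5])
U = np.array([1.0, 1.0, 2.0])
print(solve(10.0, w, U))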

Syndrome computation of BCH code

Posted: 05 Jun 2021 07:56 PM PDT

I'm just trying to understand one basic property of BCH codes. It is mentioned in many articles that, for a binary BCH code, syndrome computation has the property $$S_{2i+1} = (y(\alpha^{i+1}))^2=S_i^2,$$ where $y(x)$ is the received codeword. However, I wonder whether all the polynomial evaluations at finite-field elements in the same cyclotomic coset can be done in a similar way. For example, for $GF(2^3)$, $\{\alpha^3,\alpha^5,\alpha^6\}$ is a coset, where $\alpha^6 = \alpha^{3\cdot 2}$ and $\alpha^5 = \alpha^{3\cdot 2^2 \bmod 7}$. As shown above, $S_5 = S_2^2$. However, isn't it the case that $S_4 = S_2^{2^2}$? The proof is as follows: $$S_2^{2^2} = (y(\alpha^3))^{2^2} = y(\alpha^{12 \bmod 7}) = y(\alpha^{5}) = S_4.$$ Can someone help me verify whether this is true? Note: for a binary BCH code, $(y(\alpha^3))^{2^2} = y(\alpha^{12})$ is valid. For reference, this property is mentioned in the article "Generalized Integrated Interleaved Codes" by Yingquan Wu.
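
For what it's worth, a small numerical check in Python (building GF($2^3$) from the primitive polynomial $x^3+x+1$, an assumed choice) that evaluating a binary polynomial at conjugate elements squares the value, i.e. $y(\beta^2)=y(\beta)^2$, which is the coset relation used above:

# exp_t[i] = alpha^i as a 3-bit integer, reducing by x^3 + x + 1 (0b1011)
exp_t, x = [], 1
for _ in range(7):
    exp_t.append(x)
    x <<= 1
    if x & 0b1000:
        x ^= 0b1011
log_t = {v: i for i, v in enumerate(exp_t)}

def mul(a, b):  # multiplication in GF(8) via log/antilog tables
    if a == 0 or b == 0:
        return 0
    return exp_t[(log_t[a] + log_t[b]) % 7]

def poly_eval(coeffs, pt):  # evaluate sum coeffs[i] * pt^i; addition is XOR
    acc, p = 0, 1
    for c in coeffs:
        if c:
            acc ^= mul(c, p)
        p = mul(p, pt)
    return acc

y = [1, 0, 1, 1, 0, 0, 1]    # an arbitrary binary received word
S2 = poly_eval(y, exp_t[3])  # y(alpha^3)
print(poly_eval(y, exp_t[6]) == mul(S2, S2))                    # y(alpha^6) = y(alpha^3)^2
print(poly_eval(y, exp_t[5]) == mul(mul(S2, S2), mul(S2, S2)))  # y(alpha^5) = y(alpha^3)^4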

Alphabetical complexity of algorithms.

Posted: 05 Jun 2021 07:54 PM PDT

I have posted elsewhere the following thought:

For first order theories, is it correct that there is a brute force algorithm that tells us the shortest proof length for any given theorem ('length' means the sum of the lengths of the formulas that appear as lines in the proof)?

With the answer being:


From https://math.stackexchange.com/questions/4141341/problem-of-finding-shortest-proof-how-complex-is-it :

There are two reasonable ways to measure the length of a proof: by steps or by symbols. For step-length, we can brute-force-search for proofs and be certain we've found the shortest one as long as our axiom system is finite - for symbol-length, we only need that the language is finite and our axiom system is computably axiomatized (e.g. PA and ZFC).

That said, in either case we have a serious issue as far as complexity is concerned: as long as our axiom system is "sufficiently rich," for every computable function F there is some theorem φ which has no proof of length ≤F(length(φ)). (Basically, we're looking at the Busy Beaver function in disguise when we talk about bounding proof searches ahead of time.)


With the above in mind, let us assume that no brute-force algorithm exists that can circumvent the Busy Beaver function. Given this, does it necessarily follow that the alphabetical complexity of algorithms themselves is also unascertainable?

Inequality with absolute value [Zorich's book]

Posted: 05 Jun 2021 07:31 PM PDT

Show that $|1+x|^p\geq 1+px+c_p\varphi_p(x),$ where $c_p$ is a constant depending only on $p$, $$\varphi_p(x)= \begin{cases} |x|^2, & \text{if }|x|\leq 1, \\ |x|^p, & \text{if }|x|>1, \end{cases}$$ if $1<p\leq 2$, and $\varphi_p(x)=|x|^p$ on $\mathbb{R}$ if $2<p$.

This problem is from Zorich's book and comes after the chapter "The study of functions using the methods of differential calculus". I have tried several approaches but failed, so I'd be thankful to see a complete solution.

What is the distribution of $E[X\mid Y]$?

Posted: 05 Jun 2021 07:54 PM PDT

Let $(X, Y)$ be two r.v. with joint p.m.f. described by the following table

[Image: the joint p.m.f. table; from the attempt below, the entries $\mathbb{P}(X=x,Y=y)$ are:] $$\begin{array}{c|cc} & X=0 & X=1 \\ \hline Y=0 & 1/15 & 2/15 \\ Y=1 & 3/15 & 4/15 \\ Y=2 & 2/15 & 3/15 \end{array}$$

  1. What is the marginal distribution of $X$?
  2. Are $X$ and $Y$ independent?
  3. What is the conditional p.m.f. of $X$ given $Y=0$?
  4. What is the distribution of $E[X\mid Y]$?

My attempt

  1. Marginal distribution of $X$ can be found by summing the columns. So $$\mathbb{P}(X=0)=\frac{1}{15}+\frac{3}{15}+\frac{2}{15}=\frac{6}{15}$$ $$\mathbb{P}(X=1)=\frac{2}{15}+\frac{4}{15}+\frac{3}{15}=\frac{9}{15}$$
  2. They are not independent because for example, $$\mathbb{P}(X=0\cap Y=0)\neq \mathbb{P}(X=0)\mathbb{P}(Y=0)$$
  3. Conditional p.m.f. for $X$ is given by the formula $$p_{X\mid Y}(x,y)=\frac{p_{X,Y}(x,y)}{p_Y(y)}$$ Thus, $$\mathbb{P}(X=0\mid Y=0)=\frac{1}{3}$$ $$\mathbb{P}(X=1\mid Y=0)=\frac{2}{3}$$
  4. I am not sure what it means to ask for the distribution of $\mathbb{E}[X\mid Y]$. I know the formula is $$\mathbb{E}[X\mid Y=y]=\sum_{x\in X(\Omega)}x\,p_{X\mid Y}(x\mid y)$$ and I can use that to find the value for each of $Y=0,1,2$, but how do I find the distribution?

Does $\int_0^{2\pi}\ln\left(\frac{\cos^2\phi}{C^2}\right)\ln\left(1-\frac{\cos^2\phi}{C^2}\right)\,d\phi$ have a closed form solution?

Posted: 05 Jun 2021 07:20 PM PDT

The question is in the title. I am wondering if anyone has a nice way of approaching the following definite integral

$$\int_0^{2\pi}\frac{d\phi}{2\pi} \,\ln\left(\frac{\cos^2(\phi)}{C^2}\right)\,\ln\left(1-\frac{\cos^2(\phi)}{C^2}\right)\,.$$

Here $C$ is a positive, real constant that satisfies the constraint $C>1$. So far I have tried a simple $u$ substitution, $u = \cos\left(\phi\right)/C$. However, this doesn't get me anywhere. I have also tried performing a series expansion in small $1/C$ in the second log, performing the integration, and then summing in powers of $1/C$. However, the sum does not simplify nicely.

I have also tried relating the expression to $\rm{Li}_2\left(1-\frac{\cos^2(\phi)}{C^2}\right)$ and $\rm{Li}_2\left(\frac{\cos^2(\phi)}{C^2}\right)$ and performing the integration of these polylog functions using their series representations. However, I have trouble performing the summation for the $\rm{Li}_2\left(1-\frac{\cos^2(\phi)}{C^2}\right)$ term.
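
For what it's worth, a quick numerical baseline (in Python with SciPy) against which candidate closed forms can be tested; the factor $\ln(\cos^2\phi/C^2)$ diverges at odd multiples of $\pi/2$, but the product tends to $0$ there, so the integrand stays bounded:

import numpy as np
from scipy.integrate import quad

def integral(C):
    f = lambda p: (np.log(np.cos(p)**2 / C**2)
                   * np.log(1 - np.cos(p)**2 / C**2)) / (2 * np.pi)
    val, err = quad(f, 0, 2 * np.pi, limit=200)
    return val, err

print(integral(1.5))
print(integral(3.0))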

Is it true that if $\lim_{s \to 0} \sum_{n=0}^\infty \frac{a_n}{s^{n+1}}$ converges, then $a_n = (-1)^n a_0$?

Posted: 05 Jun 2021 07:54 PM PDT

Is it true that if $\lim_{s \to 0} \sum_{n=0}^\infty \frac{a_n}{s^{n+1}}$ converges, then $a_n = (-1)^n a_0$? The original problem was to prove that the coefficients of an entire holomorphic function are of this form, which I turned into this equivalent condition using the Laplace transform. My attempt is quite non-rigorous: use symmetry and let $$ \sum_{n=0}^\infty \frac{a_n}{s^{n+1}} = \frac{a_0 + \sum_{n=1}^\infty \frac{a_n}{s^{n}}}{s}$$ and because $\sum_{n=1}^\infty \frac{a_n}{s^{n}} = s\sum_{n=0}^\infty \frac{a_n}{s^{n+1}}- a_0$ is convergent at $0$, we can infer that $a_0 = -\lim_{s \to 0}\sum_{n=1}^\infty \frac{a_n}{s^{n}}$.

WLOG let $a_0 = 1$, and let $S_1$ denote the possible solutions for $a_1, a_2, \dots,a_n,\dots$ such that $1 = -\lim_{s \to 0}\sum_{n=1}^\infty \frac{a_n}{s^{n}}$. We choose one of them in which $a_1 = \lambda$. Now we fix $a_1 = \lambda$; note that by symmetry, if $a_1 = -\lim_{s \to 0}\sum_{n=1}^\infty \frac{a_{n+1}}{s^{n}}$ and thus $1 = -\lim_{s \to 0}\sum_{n=1}^\infty \frac{\frac{a_{n+1}}{a_1}}{s^{n}}$, then a solution of this equation is also a solution of the previous one. But this is exactly the same equation as above. So we choose the same solution in $S_1$, so that $\frac{a_{2}}{a_1}= \lambda$. Thus, if $a_1 = \lambda$, then $a_n = \lambda^n$ is one of the solutions of the original equation. Empirical computation shows that $\lim_{s \to 0} \sum_{n=0}^\infty \frac{\lambda^n}{s^{n+1}}$ converges only for $\lambda = -1$. Thus the statement is true. Is this reasoning correct?

How many "distinct" polynomials are there mod $n$?

Posted: 05 Jun 2021 07:21 PM PDT

(This sounds like a fairly natural question, but I couldn't find it asked on this site before…)

Fix an integer $n$ as the modulus, and consider polynomials (with integer coefficients) modulo $n$.

For any polynomial $f$ and any fixed $k$, we have $f(x) + nx^k \equiv f(x) \pmod n$ for all integers $x$; thus any two polynomials with some coefficient differing by a multiple of $n$ are equivalent in the sense that they give the same value mod $n$ for any integer $x$ as input. So we can assume that each coefficient of $f$ is one of $\{0, 1, 2, \dots, n-1\}$.

When $n$ is prime, we know (from Fermat's little theorem) that $x^n \equiv x \pmod n$ for all integers $x$, which means any polynomial of degree $n$ or greater is equivalent (in the same sense as above) to a polynomial of lower degree. Even when $n$ is not prime, some such relationship probably exists; for example when $n=4$, we can see that $x^4 \equiv x^2 \pmod 4$. (I guess, though this may be wrong and I haven't checked it, that if the prime factorization of $n$ is $n = p_1^{e_1}p_2^{e_2}\cdots p_r^{e_r}$, then $x^n \equiv x^{n/(p_1p_2\cdots p_r)} \pmod n$.)

In any case, whenever $x \equiv y \pmod n$, we have $f(x) \equiv f(y) \pmod n$, so it is enough to consider the polynomial's values at $n$ consecutive integers, say $\{0, 1, 2, \dots, n-1\}$. As there are at most $n$ distinct values it can take mod $n$ at each of these $n$ inputs, there are at most $n^n$ distinct polynomials in this sense.

The question: How many distinct polynomials with integer coefficients are there mod $n$? Here, the meaning of "distinct" is that polynomials $f$ and $g$ are "distinct" if there exists at least one integer $x$ at which $f(x) \not\equiv g(x) \pmod n$. Or, using the terminology from what I found later (see below), we want the number of polynomial functions from $\mathbb{Z}$ to $\mathbb{Z}/n\mathbb{Z}$.


That is the question, but of course before asking it here I tried a few things to find the answer myself, including writing the following program.

Define the "signature" (just a name I made up for writing this program: is there already a standard term for this?) of a polynomial $f$ as the tuple $(f(0), f(1), f(2), …, f(n-1))$ of its values at all $n$ points. We can count the number of distinct polynomials mod $n$ by keeping track of all the encountered signatures, as we start with the zero polynomial and successively add $ax^k$ for each $a$ from $0$ to $n-1$, for increasing $k$ until $x^k$ has the same signature as some already seen polynomial.

#!/usr/bin/env python3

# Adding the signatures of two polynomials mod n
def add(t1, t2):
    n = len(t1)
    assert n == len(t2)
    return tuple((t1[i] + t2[i]) % n for i in range(n))

def count(n):
    szero = tuple([0]*n)  # The signature of the zero polynomial
    seen = {szero}        # Only the zero polynomial has been seen
    k = 0
    sxk = tuple([1]*n)    # The signature of x^k
    while True:
        news = set()      # New signatures, by adding ax^k for some a
        saxk = szero
        for a in range(1, n):
            saxk = add(saxk, sxk)  # The signature of a*(x^k)
            for s in seen:
                newp = add(saxk, s)  # a*(x^k) + (smaller polynomial)
                if newp not in seen: news.add(newp)
        seen.update(news)
        # Now done with all polynomials of degree k
        k += 1
        sxk = tuple((sxk[i]*i) % n for i in range(n))
        if sxk in seen: break
    return len(seen)

print('n', '\t', 'count(n)')
for n in range(1, 11):  # Gets very slow for n=11
    print(n, '\t', count(n))

This prints:

n    count(n)
1    1
2    4
3    27
4    64
5    3125
6    108
7    823543
8    1024
9    19683
10   12500

The good thing about computing these small values is that there is the OEIS, using which you can, to quote Donald Knuth, "compute your way into the literature". And accordingly, searching the OEIS for this sequence $1$, $4$, $27$, $64$, $3125$, $108$, $823543$, $1024$, $19683$ gives A058067 described as "Number of polynomial functions from Z to Z/nZ" which is exactly our question. We also learn from that page that there's a formula

$$ \prod_{k=0}^{n-1} \frac{n}{\gcd(n, k!)} $$

that fits the numbers we computed and (per that page) was first proved by Aubrey J. Kempner in 1921 in a two-part paper called "Polynomials and their residue systems" (1, 2), and there's what appears to be a nice paper from 2000 by Manjul Bhargava called "The Factorial Function and Generalizations" (PDF) that generalizes it (and maybe gives a shorter proof).
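
As a sanity check before diving into the papers, here is a short sketch comparing Kempner's formula to the brute-force counts above (plain Python; math.prod needs Python 3.8+):

from math import gcd, factorial, prod

def kempner(n):
    return prod(n // gcd(n, factorial(k)) for k in range(n))

print([kempner(n) for n in range(1, 11)])
# [1, 4, 27, 64, 3125, 108, 823543, 1024, 19683, 12500]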

But I haven't read either of these papers as I ran out of my time budget for this (well, I exceeded it by writing up this question), and I hope someone can give a clear proof/derivation of the formula either using those papers or independently.


Edit: I read the latter paper and understood it enough to answer this question (left an answer below), but I have a follow-up question: Smallest number of residue classes for a polynomial function mod $n$ that separates $0$.

Map preserving joins jointly versus separately

Posted: 05 Jun 2021 07:18 PM PDT

Let $X$, $Y$ and $Z$ be complete lattices, and let $f:X\times Y \to Z$ be a function. Are the following conditions equivalent?

  • $f$ preserves joins as a map from the product lattice $X\times Y$ to $Z$;
  • For each $x\in X$, the map $f(x,-):Y\to Z$ preserves joins, and for each $y\in Y$, the map $f(-,y):X\to Z$ preserves joins.

This is a sort of analogue of the question of whether a function on a product space is continuous iff it is separately continuous in each variable (where the answer is no). In some sense, this is asking whether you can "build" all the joins in the product out of joins where you only vary one coordinate at a time.

By the way, I'm talking about possibly infinite joins, not only finite. A reference would also be welcome.

Find the shortest distance from the origin to the hyperbola $x^2+8xy+7y^2=225$

Posted: 05 Jun 2021 08:03 PM PDT

Find the shortest distance from the origin to the hyperbola $x^2+8xy+7y^2=225$

I know that $$d(x_0, x) = \sqrt{(x-x_0)^2+(y-y_0)^2+(z-z_0)^2}.$$

I also found this formula in my notes (the distance from a point to the plane $ax+by+cz=d$): $$ d(x_0,p) = \frac{|ax_0+by_0+cz_{0}-d|}{\sqrt{a^2+b^2+c^2}}$$

I just haven't seen it applied in class, so I'm a bit confused.

This was during a week when we were learning about Lagrange multipliers, and we don't seem to be given a constraint.
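
For what it's worth, the hyperbola equation itself is the constraint: a standard setup here (not spelled out in the original post) is $$\text{minimize } f(x,y)=x^2+y^2 \quad\text{subject to}\quad g(x,y)=x^2+8xy+7y^2-225=0,$$ then solve $\nabla f=\lambda\nabla g$ together with $g(x,y)=0$; the shortest distance is the square root of the minimal $f$.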

Find all unit vectors in the plane determined by vectors u and v that are perpendicular to the vector w.

Posted: 05 Jun 2021 08:03 PM PDT

Find all unit vectors in the plane determined by vectors $u=(0,1,1)$ and $ v=(2,-1,3)$ that are perpendicular to the vector $w=(5,7,-4)$.

This is the question. I found the plane determined by $u$ and $v$; its equation is $4x+2y-2z=0$, I think. What should I do next? How can I find a relation between my plane and $w$?
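
For what it's worth, one possible route (an added hint, not from the original post): a vector that lies in the plane through the origin and is perpendicular to $w$ must be perpendicular to both the plane's normal $n=(4,2,-2)$ and to $w$, hence parallel to $$n\times w=(4,2,-2)\times(5,7,-4)=(6,6,18)\parallel(1,1,3),$$ so the candidates are $\pm\frac{1}{\sqrt{11}}(1,1,3)$, which can be checked against both conditions.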
