Saturday, October 2, 2021

Recent Questions - Mathematics Stack Exchange



How would you calculate the average number of probabilistic terms needed to pass a summative threshold?

Posted: 02 Oct 2021 08:07 PM PDT

I came up with this question in a situation where, for example, the threshold is 100 and there are two scenarios. In scenario 1, each term contributes 25, with a 50% chance of contributing 100 instead. In scenario 2, each term contributes 50, with a 25% chance of contributing 100 instead. In both scenarios, the average amount added per term is equal:

$$0.5(100)+0.5(25) = 0.75(50)+0.25(100) = 62.5$$

but I realized the average terms before passing the threshold is unequal:

$$0.5(1)+0.25(2)+0.125(3)+0.125(4) = 1.875 \neq 0.25(1)+0.75(2) = 1.75$$

The methodology for the second calculation gets a lot more complex and tedious when you have a higher degree of granularity and irregularity, so I was wondering how you would calculate the number of terms to pass the threshold at more complex scales, especially those with more variables.
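One systematic way to do the second calculation is to treat the running total as the state of a Markov chain and solve the recursion $E(s) = 1 + \sum_i p_i E(s + a_i)$ (with $E(s)=0$ once $s$ passes the threshold) by memoized recursion. A minimal sketch, where `mean_terms` is a hypothetical helper name:

```python
from functools import lru_cache

def mean_terms(threshold, contributions):
    """Exact expected number of terms needed to reach `threshold`,
    where each term independently adds amount a with probability p
    for each (p, a) pair in `contributions`."""
    @lru_cache(maxsize=None)
    def E(s):
        if s >= threshold:
            return 0.0
        # one more term is drawn, then we are in state s + a
        return 1.0 + sum(p * E(s + a) for p, a in contributions)
    return E(0)

# Scenario 1: each term adds 25, or 100 with probability 0.5
# Scenario 2: each term adds 50, or 100 with probability 0.25
m1 = mean_terms(100, ((0.5, 25), (0.5, 100)))    # 1.875
m2 = mean_terms(100, ((0.75, 50), (0.25, 100)))  # 1.75
```

This scales to any finite set of contribution amounts; the cache size is bounded by the number of reachable totals below the threshold.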

Find the last two digits of $[(29+\sqrt{21})^{2000}]$.

Posted: 02 Oct 2021 08:04 PM PDT

Let $n=\left[(29+\sqrt{21})^{2000}\right]$. I want to find the last two digits of $n$. I know that $S_k=(29+\sqrt{21})^{k}+(29-\sqrt{21})^{k}$ is an integer, but from this I could not find the integer part of $(29+\sqrt{21})^{2000}$.

How to find them?
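One reason the usual trick stalls: $29-\sqrt{21}\approx 24.4 > 1$, so $S_k - 1$ is *not* the integer part of $(29+\sqrt{21})^k$. For numerical experimentation, though, the floor can be computed exactly with integer arithmetic by tracking $(29+\sqrt{21})^k = a + b\sqrt{21}$ with integers $a, b$; `floor_pow` is a hypothetical helper, a sketch rather than a proof:

```python
from math import isqrt

def floor_pow(k):
    """Exact integer part of (29 + sqrt(21))**k.

    Maintain (29 + sqrt(21))**k = a + b*sqrt(21) with integers a, b;
    then floor(a + b*sqrt(21)) = a + isqrt(21 * b * b), all exact."""
    a, b = 1, 0
    for _ in range(k):
        a, b = 29 * a + 21 * b, a + 29 * b
    return a + isqrt(21 * b * b)

last_two = floor_pow(2000) % 100
```

This verifies small cases easily, e.g. $(29+\sqrt{21})^2 = 862 + 58\sqrt{21} \approx 1127.8$, so `floor_pow(2)` is `1127`.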

Are the Peano Axioms necessary to prove $\forall{x}\exists{y}\ x<y$? (LPL 16.41)

Posted: 02 Oct 2021 08:00 PM PDT

[This is problem 16.41 in Barwise & Etchemendy's "Language, Proof, and Logic".]

The only premise given is $\forall{x}\forall{y}\ (x<y \iff \exists z (x+s(z)=y))$

From this, it asks for a formal proof of: $\forall{x}\exists{y}\ x<y$

This is easy if we can appeal to the Peano Axioms; however, none of them is listed as a premise in this exercise.

The instructions for this section say: "Use [proof software] Fitch to construct formal proofs of the following theorems from the Peano Axioms plus the definition of <. In the corresponding Exercise file, you will find as premises only the specific axioms needed to prove the goal theorem."

Moreover, on 16.41 there is an explicit hint: "Notice that the Exercise file contains the definition of < but none of the Peano Axioms! So although this follows from Exercise 16.40 [$\forall x x<s(x)$], you won't be able to use your proof of that that[sic] as a lemma."

I'm stumped and confused. I apparently can't use any Peano Axioms, but if that's true, how am I supposed to use the given premise containing the successor function? Don't we need at least a few of the Peano Axioms to define how that object behaves under addition?

Is this proof possible as described, or what am I missing? Hints only, no solutions, please!

If $a$ and $c$ are odd primes and $ax^2+bx+c=0$ has rational roots, where $b \in \mathbb{Z}$, prove that one root of the equation is independent of $a$, $b$, $c$.

Posted: 02 Oct 2021 07:47 PM PDT

For rational roots, the discriminant must be a perfect square:

$b^2-4ac=p^2$ for some integer $p$, so

$(b+p)(b-p)=4ac$

I tried the case $b+p=2a$, $b-p=2c$ (both factors must be even) and I can prove it, but what if instead $b+p=2$ and $b-p=2ac$, or $b+p=2ac$ and $b-p=2$? I am confused about the validity of restricting to one case and about how to proceed. Would you help?

For $A\in\mathbb{R}^{m\times n}$, prove that $\mathcal{R}(A^+) = \mathcal{R}(A^T)$.

Posted: 02 Oct 2021 07:42 PM PDT

For $A\in\mathbb{R}^{m\times n}$, prove that $\mathcal{R}(A^+) = \mathcal{R}(A^T)$.

Attempt: Let $x\in\mathcal{R}(A^+)$; then $A^+y = x$ for some $y\in\mathbb R^m$. Since $\mathcal{R}(A^T) = \mathcal N(A)^\perp$, it follows that $x = A^+y \in\mathcal N(A)^\perp = \mathcal{R}(A^T)$.

Conversely, let $x\in\mathcal R(A^T) = \mathcal N(A)^\perp$. Then $A^Ty = x$ for some $y\in \mathbb R^m$. I am not really sure how to proceed from here. Anything would be great!

Note that $\mathcal R(A^T) = \mathcal N(A)^\perp$ and $\mathcal R(A)^\perp = \mathcal N(A^T)$.
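Not a proof, but a quick numerical sanity check of the claim with NumPy: two subspaces of $\mathbb R^n$ coincide iff stacking their spanning columns does not increase the rank. The rank-deficient test matrix here is an assumption chosen to make the check non-trivial:

```python
import numpy as np

rng = np.random.default_rng(0)
# a deliberately rank-deficient 5x4 matrix (rank 3 almost surely)
A = rng.standard_normal((5, 3)) @ rng.standard_normal((3, 4))

Ap = np.linalg.pinv(A)          # A^+ has shape 4x5, same as A.T

# R(A^+) = R(A^T) iff stacking the two column sets adds no directions
r_A   = np.linalg.matrix_rank(A)
r_mix = np.linalg.matrix_rank(np.hstack([Ap, A.T]))
```

If the ranges were different, `r_mix` would exceed `r_A`.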

Analytic Solution to this PDE

Posted: 02 Oct 2021 07:37 PM PDT

I've been trying to find an analytical solution for the following cylindrical diffusion equation:

$$\frac{\partial V}{\partial t} = k \frac{\partial^2 V}{\partial r^2} + \frac{k}r \frac{\partial V}{\partial r} - \frac{P\mu_C}{r\rho}$$

BCs:

$$V(0, t) = C_1$$ $$V(r_{cr}, t) = 0$$

IC:

$$V(r,0)=0$$

where $P, \mu_C, \rho$, and $k$ are constants. I have tried using Bessel functions to find a solution, but I think it is beyond my ability.

Any help is much appreciated. Thanks.

Conditions for rational function oblique asymptote (formal)

Posted: 02 Oct 2021 07:34 PM PDT

The graph $y = f(x)$ has an oblique asymptote along the line $y = mx + b$ (with $m \not= 0$) if $$ \displaystyle\lim_{x \rightarrow \infty} |f(x) - (mx+b)| = 0. $$

How do you fully describe, algebraically, the conditions for a rational function $\dfrac{f(x)}{g(x)}$ to have a slant asymptote, where $f$ and $g$ are both polynomials?

Thanks!

Proving $\int_{-1}^1 \frac{1}{|x-y|^{2/3}} \frac{1}{|x|^{1/3}} dx \approx -2\log y$ as $y\to 0$

Posted: 02 Oct 2021 07:55 PM PDT

I want to show that $$\int_{-1}^1 \frac{1}{|x-y|^{2/3}} \frac{1}{|x|^{1/3}} dx \approx -2\log y$$ as $y\to 0$. Here, $f(y) \approx g(y)$ means that $\lim_{y\to 0} \frac{f(y)}{g(y)}=1$.

By computing the integral and finding asymptotic expression using Mathematica, I think the above is true. (The exact value of the integral is a very complicated function involving hypergeometric functions.) But, how can I prove this asymptotic expression in a clever way?
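Before looking for a clever proof, the asymptotics can be checked numerically without Mathematica; this is only a sketch using `scipy.integrate.quad`, with the `points` argument marking the two integrable singularities:

```python
import numpy as np
from scipy.integrate import quad

def I(y):
    f = lambda x: 1.0 / (abs(x - y) ** (2 / 3) * abs(x) ** (1 / 3))
    # `points` tells quad where the integrable singularities x=0, x=y sit
    val, _ = quad(f, -1, 1, points=[0, y], limit=500)
    return val

# ratio I(y) / (-2 log y) should drift toward 1 as y -> 0+
r_coarse = I(1e-3) / (-2 * np.log(1e-3))
r_fine   = I(1e-6) / (-2 * np.log(1e-6))
```

The convergence is only logarithmic (the $O(1)$ constant in $I(y) = 2\log(1/y) + C + o(1)$ is divided by $\log(1/y)$), so the ratio approaches 1 slowly.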

How to prove that a subset of the complex plane is path-connected

Posted: 02 Oct 2021 07:26 PM PDT

Given a subset of the complex plane, how do you prove that it is path-connected?

For example, given the subset $S = \{z : a < \arg(z) < b\}$ with $0 < a < b < 2\pi$, how do you show that it is path-connected?

I know that I need to exhibit a path between two arbitrary points, say from $z_1$ to $z_2$, but I am not sure how to do it. Are there any standard ways to go about this problem?

How to prove Lipschitz functions are not Lipschitz in the $p$-variation metric?

Posted: 02 Oct 2021 07:27 PM PDT

Let $V: [0, T] (\subset \mathbb{R}) \to \mathbb{R}$ be Lipschitz continuous, i.e. $|V(x)-V(y)| \leq K | x - y |$ for all $x,y$ in $[0, T] $. Define the $p$-variation norm by:

$$\| V\|_p = \left(\sup_{D \subset [0,T]} \sum_{j=1}^{n}\lvert V(t_j)-V(t_{j-1}) \rvert ^p\right)^{\frac{1}{p}}$$

Here, $D = \{t_0, t_1, \ldots, t_n\}$, with $t_0=0$, $t_n=T$, and $t_{j-1} < t_j$, is generally referred to as a partition of the interval $[0,T]$.

Can you find a counterexample to the claim that $V$ being Lipschitz implies the $p$-variation of the difference satisfies a Lipschitz condition, as below?

$$ ||V(x)-V(y)||_p \leq \tilde{K} || x - y ||_p $$

Some thoughts

  • Another way of phrasing this: can you find an example of a function that is Lipschitz with respect to the Euclidean distance but not with respect to the $p$-variation distance?
  • Apparently it is well known that the claim is false. I have heard from experts that it has been disproven, apparently in this paper, though I could not find where in the paper.
  • If $V$ were linear, this would be true; so, from what I have been told, a possibly good idea is to make the nonlinear part of $V$ more explicit by writing:

$$V(x) - V (y) =: g(x,y) (x-y)$$

Three recurrence relations for orthogonal, monic, and orthonormal polynomials

Posted: 02 Oct 2021 07:21 PM PDT

  1. $\{p_n\}$ is a family of orthogonal polynomials satisfying

$$p_{n+1} = (A_n x +B_n) p_n +D_n p_{n-1}$$

  2. whose monic form satisfies

$$x p_n(x) = p_{n+1}(x) + b_n p_n(x) + d_n p_{n-1}(x),$$

  3. whose orthonormal form satisfies

$$x p_n(x) = E_n p_{n+1}(x) + F_n p_n(x) + E_{n-1} p_{n-1}(x).$$

Now the problem is: how does one obtain the other two recurrences from any one of them? What is the relation among the coefficients?

Finding number of subsets that describe a basis for a basic solution

Posted: 02 Oct 2021 07:25 PM PDT

Given the following constraints,

\begin{equation} \begin{split} x_1&+&x_3\le10 \\ -x_1&+&x_3\le0\\ x_2&+&x_3\le10\\ -x_2&+&x_3\le0\\ x_1,x_2&,&x_3\ge0\\ \end{split} \end{equation}

I want to find the number of subsets that describe a basis for which (10,10,0) is a basic solution.

It is given that $S = \{x_1, x_2, x_3, s_1, s_2, s_3, s_4\}$ and that there are $\binom{7}{4}=35$ possible ways to select a subset of $4$ distinct variables from $S$.

Any hints?
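As a hint-by-experiment (not a proof), the 35 candidate bases can simply be enumerated: put the problem in standard form $[A \mid I]\,x = b$ with slacks $s_1,\dots,s_4$, solve $B x_B = b$ for each nonsingular choice of 4 columns, set the nonbasic variables to zero, and keep the bases whose solution has $(x_1,x_2,x_3) = (10,10,0)$:

```python
from itertools import combinations
import numpy as np

# standard form: [A | I] x = b, columns ordered x1,x2,x3,s1,s2,s3,s4
A = np.array([[ 1,  0, 1, 1, 0, 0, 0],
              [-1,  0, 1, 0, 1, 0, 0],
              [ 0,  1, 1, 0, 0, 1, 0],
              [ 0, -1, 1, 0, 0, 0, 1]], dtype=float)
b = np.array([10., 0., 10., 0.])
target = np.array([10., 10., 0.])          # desired (x1, x2, x3)

bases = []
for cols in combinations(range(7), 4):
    B = A[:, cols]
    if abs(np.linalg.det(B)) < 1e-9:
        continue                            # singular: not a basis
    x = np.zeros(7)
    x[list(cols)] = np.linalg.solve(B, b)   # nonbasic variables stay 0
    if np.allclose(x[:3], target):
        bases.append(cols)
```

The enumeration suggests why the count is so small: at $(10,10,0)$ the slack values are forced to $s=(0,10,0,10)$, so every nonzero variable must be basic.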

Tangent cone: don't understand

Posted: 02 Oct 2021 07:19 PM PDT

A cone is a set that is closed under multiplication by positive scalars. The tangent cone of $f$ at $\boldsymbol{x}$ is defined as the set of descent directions of $f$ at $\boldsymbol{x}$: $$\mathcal{T}_f(\boldsymbol{x})=\{\boldsymbol{u}\in\mathbb{R}^n: f(\boldsymbol{x}+t\cdot\boldsymbol{u})\le f(\boldsymbol{x}) \ \text{for some}\ t>0\}.$$

The problem is: how can I get the tangent cone of $f(\boldsymbol{x})=\|\boldsymbol{x}\|_1$ at the point $\boldsymbol{x}=(1,0)$?


Asymptotic expansion of integral with Hankel function

Posted: 02 Oct 2021 07:17 PM PDT

I am interested in the leading term of the asymptotic expansion of the following integral as $k\to\infty$: $$ I_k(x,z) = \int_{\mathbb{R}} S(x-y) H_0^{(1)}\left(k\sqrt{y^2+z^2}\right) \mathrm{d} y $$ where $S:\mathbb{R}\to\mathbb{R}$ is a smooth, compactly supported function and $H_0^{(1)}$ is the order-zero Hankel function of the first kind.

This integral comes from the solution $u(x,z)$ to 2D Helmholtz equation with source concentrated on the line $\{z = 0\}$. $$ \Delta u + k^2 u = S(x)\delta(z) \quad (x,z)\in\mathbb{R}^2 $$

My intuition is that it would be something like $$ I_k \sim \frac{c}{k^p}e^{\mathrm{i} k |z|} S(x) + o\!\left( \frac{1}{k^p} \right) $$ for some $p>0$, but I'm not sure how to derive it. Any hint would be appreciated.

Continuity and metric space

Posted: 02 Oct 2021 07:21 PM PDT

Let $X$ be a metric space with metric $d$.

  1. Show that $d: X \times X \rightarrow \mathbb R$ is continuous
  2. Let $X'$ denote a space having the same underlying set as $X$. Show that if $d: X' \times X' \rightarrow \mathbb R$ is continuous, then the topology of $X'$ is finer than the topology of $X$.

I know how to approach question 1 (I think I am doing it right):

Let $(x, y) \in X \times X$. We want to prove that given $\epsilon \gt 0$, there exist open sets $U \ni x$ and $V \ni y$ such that $$d\left(U, V\right) \subset \left(d\left(x, y\right) - \epsilon, d\left(x, y\right) + \epsilon\right)$$ Let $U = B_d(x, \frac \epsilon 2)$ and $V = B_d(y, \frac\epsilon 2)$; then for all $(x', y') \in U \times V$ we have: $$d(x,y) - \epsilon\leq d(x, y) - d(x, x') - d(y, y')\leq d(x', y') \leq d(x, x') + d(x, y) + d(y, y') \leq d(x, y) + \epsilon$$ Therefore $d\left(U, V\right) \subset \left(d\left(x, y\right) - \epsilon, d\left(x, y\right) + \epsilon\right)$, which shows $d$ is continuous.

However, I don't know how to approach the second question. Isn't it trivial, given that $X'$ has the same underlying set as $X$? Because then any open set in $X$ is automatically open in $X'$. Why does $d$ still have to be continuous?

I need help solving an arithmetic progression question

Posted: 02 Oct 2021 08:04 PM PDT

If a sequence is $3, 5, 9, 11, 15, 17, 21$ and so on, and the differences between consecutive terms are $2, 4, 2, 4, 2, 4$ and so on, what is the $21$st term of the sequence? Please help me solve this.
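Each pair of consecutive differences adds $2+4=6$, so $t_{2m+1} = 3 + 6m$ and the 21st term is $3 + 6\cdot 10 = 63$. A brute-force sketch (`term` is a hypothetical helper) confirms the closed form:

```python
def term(k):
    """k-th term (1-indexed) of 3, 5, 9, 11, 15, 17, 21, ...
    where the differences alternate 2, 4, 2, 4, ..."""
    t = 3
    for i in range(k - 1):
        t += 2 if i % 2 == 0 else 4
    return t
```

Here `term(21)` returns `63`.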

Equation to accelerate an object along a known curve until it reaches a target distance

Posted: 02 Oct 2021 08:07 PM PDT

This problem is deceptively simple, but it's been driving me mad:

I have a function $f(x)$ with $f(0) = 0$ and $f(1) = MaxSpeed$.

I have an object at rest and a target point 1 km away. My goal is to move the object along a single dimension such that its speed always matches the output of that function. For example, when it is halfway there I'd expect $Speed = f(0.5)$, and when it hits the 1 km goal it would have $Speed = f(1) = MaxSpeed$.


My problem: $f(0)$ (the object's initial position) gives a speed of $0$, so the object never moves. Giving the feedback loop a kick using something like $Speed = f(0.001)$ does eventually do the proper thing, but it takes ages.

My gut says that I need to bring time into the problem somehow... but I'm fairly certain I can choose any 2 of "reasonable time," "reaches speed exactly at given distance," or "acceleration is determined by a function" and not all 3. Any ideas?
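Your gut is right that time is the missing ingredient: the travel time of the feedback loop is $\int_\epsilon^1 dx/f(x)$, which diverges as the kick $\epsilon \to 0$ whenever $f(0)=0$. A sketch with a hypothetical speed curve $f(x)=x$ (so $MaxSpeed=1$), where the time grows like $\ln(1/\epsilon)$:

```python
import math

def travel_time(f, eps, dt=1e-4):
    """Euler-integrate the loop `speed = f(position)` from a small
    initial kick `eps` until the position reaches 1; returns the time."""
    x, t = eps, 0.0
    while x < 1.0:
        x += f(x) * dt
        t += dt
    return t

f = lambda x: x   # hypothetical curve with f(0) = 0, f(1) = MaxSpeed = 1
t_big_kick   = travel_time(f, 1e-3)   # about ln(1000) ≈ 6.9
t_small_kick = travel_time(f, 1e-6)   # about ln(1e6)  ≈ 13.8
```

With $\epsilon = 0$ exactly, the loop never moves at all, so "takes ages" is literal divergence, and any scheme that prescribes speed purely as a function of distance with $f(0)=0$ has this property.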

Logarithms: Finding the exponent given a base & an argument

Posted: 02 Oct 2021 07:33 PM PDT

My question is this: in a logarithm, given a base and an argument, how do you derive the exponent (without counting, trial and error, or educated guessing)?

e.g.

$\log_2 (2048) = \;?$

If:

$\log_x(y) = e$

I would like to know the algebra/formula for deriving the exponent ($e$) when you plug in a base ($x$) and an argument ($y$).

What I am seeking is not an algorithm but rather a formula, if there is one.

Thank you for reading
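The formula being asked about is the change-of-base identity $e = \log y / \log x$, where the logs on the right can be in any common base. A minimal sketch (`exponent` is a hypothetical helper; Python's `math.log` also accepts a base directly as `math.log(y, x)`):

```python
import math

def exponent(base, argument):
    """Change of base: if base**e == argument then
    e = log(argument) / log(base), in any common log base."""
    return math.log(argument) / math.log(base)

e = exponent(2, 2048)   # close to 11, up to floating-point rounding
```

So $\log_2(2048) = 11$, since $2^{11} = 2048$.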

expected value of geometric distribution using "first-step analysis"

Posted: 02 Oct 2021 07:35 PM PDT

How does this work? I'm particularly confused by "we've wasted one toss and are back to where we started." Are we back to where we started, though? $E(X)$ (on the LHS of the equation) is the expected number of failures before the first success. But then we have "$1+E(X)$", which is strange: why are we looking at $E(X)$ if we already looked at the first toss? Shouldn't it be $1 + E(X-1)$? Thank you very much!
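The resolution is memorylessness: conditioned on the first toss failing, the number of *remaining* failures has exactly the same distribution as $X$, so its mean is $E(X)$, not $E(X)-1$. The first-step equation $E = (1-p)(1+E)$ can be checked by simple fixed-point iteration against the known answer $(1-p)/p$; `first_step_mean` is a hypothetical helper:

```python
def first_step_mean(p, iterations=300):
    """Iterate the first-step equation E <- (1 - p) * (1 + E):
    with probability 1-p we spend one toss and, by memorylessness,
    face the same expected remaining count E all over again."""
    E = 0.0
    for _ in range(iterations):
        E = (1 - p) * (1 + E)
    return E
```

For $p = 1/4$ this converges to $3 = (1-p)/p$, matching the geometric mean number of failures before the first success.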


Let $\Bbb{Q} \cap [0,1] = \{q_1, q_2, \cdots\}$, $U = \bigcup(q_n - 1/2^{n+4}, q_n + 1/2^{n+4})$. Prove $I_{\partial U}$ is not Riemann integrable

Posted: 02 Oct 2021 07:40 PM PDT

Let $\Bbb{Q} \cap [0,1] = \{q_1, q_2, q_3, \cdots \}$, and $U = \bigcup_{n=1}^{\infty} (q_n - 1/2^{n+4}, q_n + 1/2^{n+4})$. Show that $I_{\partial U}$, defined by $I_{\partial U}(x) = 1$ if $x \in \partial U$ and $0$ otherwise, is not Riemann integrable.

I got that $[0, 1] - U \subset \partial U$. Hence it's enough to show that $I_{[0, 1] - U}$ is not Riemann integrable. But I am still stuck.

The number $4$ is irrelevant here; I just chose a number such that $U$ does not cover $[0, 1]$.

Edit: It occurs to me that the theorem "Riemann integrable $\iff$ the set of discontinuity points has measure zero" solves the problem. But is there any way to prove it directly?

I tried to prove that the upper Riemann sum

$$ \lim_{N \rightarrow \infty} \sum_{n \in \Bbb{Z} \text{ and }[\frac{n}{2^N}, \frac{n+1}{2^N}) \cap \partial U \ne \varnothing} \frac{1}{2^N} $$

is bigger than $0$ directly, but got stuck.

Edit again:

Accidentally I found out that it's actually very easy. Suppose $I_{\partial U}$ is Riemann integrable. Then $\int_{\Bbb{R}}I_{\partial U}\,dx = 0$, since its lower Riemann sum is $0$. This means its Lebesgue measure satisfies $m(\partial U) = 0$. But we also know $[0, 1] - U \subset \partial U$, hence $$m(\partial U) \geq m([0, 1] - U) = 1 - m(U) \geq 1 - \sum_{n=1}^{\infty}1/2^{n+3} = 7/8,$$ contradicting $m(\partial U) = 0$.

Smallest eigenvalue of a sum of two squared GOE matrices

Posted: 02 Oct 2021 07:57 PM PDT

Consider two iid $N \times N$ GOE random matrices$^1$ $A$ and $B$.

Let $M = A^2 + g B^2$ for $g>0$, and let $\lambda_\min(M)$ denote the smallest eigenvalue of $M$.

What is the expected value of the smallest eigenvalue of $M$ in the limit of large $N$, i.e. $\overline{\lambda_\min} := \lim_{N \to \infty} \langle\lambda_\min(M)\rangle$?

Numerics indicate that the limit is finite and satisfies $\overline{\lambda_\min} = O(\min(g,1))$.

A sub case of interest is the asymptotic scaling of $\overline{\lambda_\min} $ in the limit of small $g$. In this limit we have the upper bound $\overline{\lambda_\min} \leq g \langle v^T B^2 v \rangle = g$ where $v$ is the normalised eigenvector associated to the smallest eigenvalue of $A^2$. I suspect that this may be tight, $\overline{\lambda_\min} \sim g$, but it is not clear to me.


1. Specifically, the matrix elements $A_{ij}$ are jointly Gaussian real random variables, with mean $\langle A_{ij} \rangle = 0$ and covariance $\langle A_{ij} A_{nm} \rangle = N^{-1}(\delta_{in}\delta_{jm} + \delta_{im}\delta_{jn})$, and similarly for $B$.
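For anyone reproducing the numerics, here is a minimal sketch of the setup with NumPy, using the normalisation of footnote 1 (off-diagonal variance $1/N$, diagonal variance $2/N$); $M$ is a sum of squares of symmetric matrices, so $\lambda_\min(M) \ge 0$ in any sample:

```python
import numpy as np

def goe(N, rng):
    """GOE draw with the covariance in footnote 1:
    off-diagonal variance 1/N, diagonal variance 2/N."""
    G = rng.standard_normal((N, N)) / np.sqrt(N)
    return (G + G.T) / np.sqrt(2)

rng = np.random.default_rng(0)
N, g = 400, 0.5
A, B = goe(N, rng), goe(N, rng)
M = A @ A + g * (B @ B)
lam_min = np.linalg.eigvalsh(M)[0]   # eigvalsh sorts ascending
```

Averaging `lam_min` over many draws and several values of `N` gives the quantity whose limit the question asks about.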

Finding an intermediate subgroup

Posted: 02 Oct 2021 07:29 PM PDT

Given three finite abelian groups with strict inclusions $$A'\subsetneq A''\subsetneq A,$$ is there a "canonical" way to find a subgroup $H$ with $A'\subset H\subset A$ such that $A''\cap H=A'$ and $A/A''\simeq H/A'$?

I said a "canonical" way since, using the classification of finite abelian groups, all the groups involved can be written in the form $$\bigoplus_{i=1}^k\mathbb Z/n_i\mathbb Z$$ and we can probably always find such a group by playing with the $n_i$ and $k$. However, that is a bit too "artificial" to me. I am wondering if there is an "abstract" way to construct, or prove the existence of, such a group without using the classification.

Selection rules: Why is $\frac1m\sum_g\sum_{\lambda}^{\oplus} \underline{\underline{I}}_{\,\lambda}\otimes\underline{\underline{D}}^{(\lambda)}(g)=0$?

Posted: 02 Oct 2021 07:42 PM PDT

Here is the derivation from some notes given to me. I uploaded these handwritten notes for two reasons:

  1. To show you that this is the only source of information I have available to me (and it's incredibly difficult to learn from it).
  2. In the hopes that you can make more sense of them than I can.

There are two expressions in these notes which I would like to understand; I have marked them with red question marks above the relevant expressions.

Selection rules derivation part 1

Selection rules derivation part 2

For the first expression,

$$\frac1m\sum_g\underline{\underline{D}}^{(\lambda)}(g)=\begin{cases} 0, & \text{for all other $\lambda$} \\ 1, & \text{for the fully symmetric IRREP} \end{cases}$$ The $\underline{\underline{D}}^{(\lambda)}(g)$ in this expression is (I think) supposed to represent the triple direct product at the top of the page, namely $$\underline{\underline{D}}^{(\lambda)}(g)\equiv\Big[{\underline{\underline{D}}^{(1)}}^*\otimes {\underline{\underline{D}}}^{\prime} \otimes {\underline{\underline{D}}^{(2)}}\Big]_{n,\,\beta,\,p,\, i,\,\alpha,\,j}$$ As I understand it, $\frac1m\sum_g\underline{\underline{D}}^{(\lambda)}(g)$ is not an orthogonality expression (though it should be), for otherwise how can it possibly equal zero or $1$?


While in the middle of constructing this question, I found a PDF version of a book$^{\large\zeta}$ with a chapter that derives selection rules in a remarkably similar way to my notes above:

Selection rules first page Selection rules second page Selection rules last page

I would like to know why "We note that the sum of matrices of any irreducible representation, other than the identity representation, over the group, is equal to the zero matrix (see Exercise 3.7)", written in the paragraph underneath equation $(21.7)$ above. Exercise 3.7 has been asked about and answered here, which I found earlier; I put a comment below that question to indicate the source. User @Gerry Myerson answered it, but I am unable to understand his proof.

So in summary, I would like to know why $\sum_g\sum_{\lambda}^{\oplus} \underline{\underline{D}}^{(\lambda)}(g)=0?\tag{21.9}$

Or, if you prefer, could someone please prove why the sum over a group of the matrix elements of any irreducible representation other than the identity/fully symmetric/trivial representation is equal to zero?


$^{\large\zeta}$The textbook pages 264-266 embedded in this question as images are from "Applications of group theory in quantum mechanics" by Petrashen & Trifonov.
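A concrete instance of the claim (not a proof) may help: for the 2-dimensional irrep of $C_{3v} \cong S_3$, realised as the three rotations by $0, 2\pi/3, 4\pi/3$ plus three reflections, the six representation matrices sum to the zero matrix, while for the trivial irrep the same sum is of course $|G|$ times the identity:

```python
import math

def rot(t):   # rotation by angle t
    return [[math.cos(t), -math.sin(t)], [math.sin(t), math.cos(t)]]

def refl(t):  # reflection across the line at angle t/2
    return [[math.cos(t), math.sin(t)], [math.sin(t), -math.cos(t)]]

# the 2-D irrep of C3v (= S3): three rotations, three reflections
group = [rot(2 * math.pi * k / 3) for k in range(3)] + \
        [refl(2 * math.pi * k / 3) for k in range(3)]

# entrywise sum of all six matrices
total = [[sum(M[i][j] for M in group) for j in range(2)]
         for i in range(2)]
```

Every entry of `total` vanishes (to machine precision), exactly as the quoted remark below equation $(21.7)$ asserts for any non-identity irrep.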

Prove $ \sum_{n=1}^{2q-1}\frac{n}{q}\sin\left(\frac{\pi np}{q}\right)=-\cot\left(\frac{\pi p}{q}\right)-\csc\left(\frac{\pi p}{q}\right) $

Posted: 02 Oct 2021 07:25 PM PDT

Here $p$ and $q$ are positive integers with $p < q$.

How to prove this identity? $$ \sum_{n=1}^{2q-1}\frac{n}{q}\sin\left(\frac{n\pi p}{q}\right)=-\cot\left(\frac{\pi p}{q}\right)-\csc\left(\frac{\pi p}{q}\right)=-\cot\left(\frac{\pi p}{2q}\right) $$ The identity splits into two parts: $$ \sum_{n=1}^{q-1}\frac{2n}{q}\sin\left(\frac{2n\pi p}{q}\right)=-\cot\left(\frac{\pi p}{q}\right)\\ \sum_{n=1}^{q}\frac{(2n-1)}{q}\sin\left(\frac{(2n-1)\pi p}{q}\right)=-\csc\left(\frac{\pi p}{q}\right) $$ I found these identities, by comparison, during the calculations for the identity in my previous question. I am curious how to prove them in a general way.

I also found some other identities, all by comparison, such as:

For odd $q$: $$ \sum_{n=1}^{(q-1)/2}\frac{2n}{q}\sin\left ( \frac{2n\pi p}{q} \right )=\frac{(-1)^{p-1}}{2}\csc\left( \frac{\pi p}{q} \right ) \\ \sum_{n=1}^{(q-1)/2}\frac{(2n-1)}{q}\sin\left ( \frac{(2n-1)\pi p}{q} \right )=\frac{(-1)^{p-1}}{2}\cot\left( \frac{\pi p}{q} \right ) \\ \sum_{n=1}^{(q-1)/2}2\sin\left ( \frac{2n\pi p}{q} \right )=\cot\left( \frac{\pi p}{q} \right )+(-1)^{p-1}\csc\left( \frac{\pi p}{q} \right ) \\ \sum_{n=1}^{(q-1)/2}2\sin\left ( \frac{(2n-1)\pi p}{q} \right )=\csc\left( \frac{\pi p}{q} \right )+(-1)^{p-1}\cot\left( \frac{\pi p}{q} \right ) $$ For even $q$: $$ \sum_{n=1}^{q/2}\frac{2n}{q}\sin\left ( \frac{2n\pi p}{q} \right )=\frac{(-1)^{p-1}}{2}\cot\left( \frac{\pi p}{q} \right ) \\ \sum_{n=1}^{q/2}\frac{(2n-1)}{q}\sin\left ( \frac{(2n-1)\pi p}{q} \right )=\frac{(-1)^{p-1}}{2}\csc\left( \frac{\pi p}{q} \right ) \\ \sum_{n=1}^{q/2}2\sin\left ( \frac{2n\pi p}{q} \right )=\cot\left( \frac{\pi p}{q} \right )+(-1)^{p-1}\cot\left( \frac{\pi p}{q} \right ) \\ \sum_{n=1}^{q/2}2\sin\left ( \frac{(2n-1)\pi p}{q} \right )=\csc\left( \frac{\pi p}{q} \right )+(-1)^{p-1}\csc\left( \frac{\pi p}{q} \right ) $$ How can these be proved in a general way?
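Before hunting for a general proof, the main identity is easy to spot-check numerically (note the third member follows from the known half-angle identity $\cot\theta + \csc\theta = \cot(\theta/2)$); a minimal sketch:

```python
import math

def lhs(p, q):
    """The sum on the left of the main identity, n = 1, ..., 2q - 1."""
    return sum(n / q * math.sin(math.pi * n * p / q)
               for n in range(1, 2 * q))

def rhs(p, q):
    """-cot(pi p / q) - csc(pi p / q), which also equals -cot(pi p / 2q)."""
    t = math.pi * p / q
    return -1 / math.tan(t) - 1 / math.sin(t)
```

Checking a few pairs $(p,q)$ with $p<q$ shows agreement to machine precision, which at least rules out typos before attempting a proof (e.g. by Abel summation or differentiating a geometric sum of exponentials).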

Probability of getting into my favorite PhD

Posted: 02 Oct 2021 07:28 PM PDT

Suppose there are $n$ potential grad students and $n$ universities. Each university has one scholarship to fund a PhD. Each student has a strict ranking of the universities, so she is not indifferent between any two of them. The preferences of each student are independent and generated uniformly at random.

Students are ordered according to their height (or some other irrelevant attribute) and choose their favourite university to study at in this order. Each university accepts the first student who applies to it, and this decision is final and irrevocable.

My question is: for the $k$-th tallest applicant, what is the probability that she ends up at her $j$-th ranked university?

For example, the tallest applicant gets her first choice with probability $1$, and any other university with probability $0$.

For the second tallest, the probability of getting her first choice is $\frac{n-1}{n}$, her second choice is $1/n$ (i.e. the tallest applicant chose her favourite school), and any other university $0$.

For the third tallest, the probability of getting her top choice is $\frac{n-1}{n} \times \frac{n-2}{n-1}$, her second-best choice $\frac{2}{n}$, and so on.

Many thanks in advance.

Edit: I'm thinking that for the $k$-th tallest applicant, the chance of getting her top choice is $\frac{n+1-k}{n}$.

Second edit: Suppose now that if a student gets her first choice, she completes her PhD with probability $1$; if she gets her second choice, with probability $1/2$; and in general, if a student gets her $k$-th choice, she finishes her PhD with probability $\frac{1}{2^{k-1}}$. What is the expected number of students who fail to complete their PhD? Simulations tell me it is $0.38 n$, but I would like to understand why.
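The first edit's conjecture $\frac{n+1-k}{n}$ is easy to check by simulation: the $k$-th student's favourite is uniform and independent of the $k-1$ distinct universities already taken, so the top-choice probability should be $1 - \frac{k-1}{n}$. A sketch of the serial process described above, with `pr_top_choice` as a hypothetical helper:

```python
import random

def pr_top_choice(n, k, trials=40_000, seed=0):
    """Estimate P(k-th tallest student gets her first choice) when
    students pick, in height order, their best still-available school."""
    random.seed(seed)
    hits = 0
    for _ in range(trials):
        taken = set()
        for _student in range(k):
            prefs = random.sample(range(n), n)   # uniform strict ranking
            choice = next(u for u in prefs if u not in taken)
            taken.add(choice)
        hits += (choice == prefs[0])             # k-th student's outcome
    return hits / trials
```

For $n=10$, $k=4$ the estimate settles near $\frac{n+1-k}{n} = 0.7$, supporting the conjecture; the same loop, extended to record the rank $j$ each student receives, gives the second edit's failure count.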

Intuition Regarding the Derivatives of $e^{-1/x^2}$ About the Origin

Posted: 02 Oct 2021 08:07 PM PDT

Every derivative of $e^{-1/x^2}$ at the origin is zero, yet this is not a constant function: the limit as $x$ diverges to either side is $1$, while the value at $0$ is $0$ by continuous extension.

My intuition is that if every derivative is zero, then the function can never change. Why is my intuition wrong?
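The flatness at the origin can be confirmed symbolically; a quick SymPy sketch checking that the function and its first few derivatives all tend to $0$ at the origin (each derivative is a rational function in $1/x$ times $e^{-1/x^2}$, and the exponential beats every power):

```python
import sympy as sp

x = sp.Symbol('x')
f = sp.exp(-1 / x**2)

# the function and its first few derivatives all tend to 0 at the origin
limits = [sp.limit(sp.diff(f, x, n), x, 0) for n in range(4)]
```

All four limits come out exactly $0$, illustrating that the Taylor series at $0$ is identically zero even though $f$ is not: the zero derivatives at one point constrain only the polynomial approximations there, not the function globally.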

Proof of Discontinuity Criterion for functions

Posted: 02 Oct 2021 08:07 PM PDT

I would really appreciate a proof of the Discontinuity Criterion for functions. It is stated as follows...

Let $A$ be a subset of $\mathbb{R}$, let $f: A \to \mathbb{R}$, and let $c \in A$. Then $f$ is discontinuous at $c$ iff there exists a sequence $(x_n)$ in $A$ such that $x_n$ converges to $c$ but the sequence $f(x_n)$ does not converge to $f(c)$.

Thank you!

Are there 2D analogues for integer division and modular arithmetic?

Posted: 02 Oct 2021 07:55 PM PDT

Let's say you have a "parallelogram" of points $P = \{(0, 0), (0, 1), (1, 1), (0, 2), (1, 2)\}$. This parallelogram lies between $u = (2, 1)$ and $v = (-1, 2)$.

Then for any point $n \in \mathbb{Z}^2$, you can write

$n = dQ + r$, with $d \in \mathbb{Z}^2$,

where $Q = \begin{pmatrix} u\\ v\end{pmatrix} = \begin{pmatrix}2 &1 \\ -1 &2\end{pmatrix}$, and $r \in P$.

As an example of how these quantities relate:

$$(3, 5) = (2, 1)\begin{pmatrix}2 &1 \\ -1 &2\end{pmatrix} + (0, 1)$$

Here is another example, showing that $Q$ need not be of the form $\begin{pmatrix}a & -b \\b&a \end{pmatrix}$:

$$Q = \begin{pmatrix}-4 & 2 \\ 1 & 2\end{pmatrix}$$

My question is: is this type of thing studied (for general parallelograms), and what is it called?

I have many questions about it and do not know where to find more information. For example: how do you find $d$ and $r$? Does it make sense (and what is the use) of defining $n_1 \equiv n_2 \pmod Q$ when $n_1$ and $n_2$ have the same "remainder" when "divided" by $Q$, and if so, what rules govern this system? What is the situation on a hexagonal lattice? What if $P$ is not a parallelogram?


Background

The reason I'm interested in this is that I use it in a large variety of algorithms, including sub-sampling of grids, maze generation, and using grids to represent more complicated things (for example, using a hex grid to represent a triangular grid + vertices).

Although I have ways to calculate $d$ and $r$ given $n$ and $Q$, my algorithms are rather clumsy and hacked together. I'd like to back this with some theory, perhaps simplify and optimize the algorithms, and be in a better position to design related algorithms and justify them mathematically.
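For what it's worth, one clean way to compute $d$ and $r$ uses only integer arithmetic: $d = \lfloor n Q^{-1} \rfloor$ componentwise (with $Q^{-1} = \operatorname{adj}(Q)/\det Q$, so floor division does the rounding exactly), which places $r$ in the half-open fundamental parallelogram $\{aQ : a \in [0,1)^2\}$. `divmod2` is a hypothetical helper sketching this, under the assumption that $\det Q \neq 0$ and that this particular fundamental-domain convention matches your $P$:

```python
def divmod2(n, Q):
    """2-D analogue of divmod: write the row vector n as n = d Q + r
    with d integral and r in the half-open cell {a Q : a in [0,1)^2}.
    Exact integer arithmetic via Q^{-1} = adj(Q) / det(Q)."""
    (a, b), (c, e) = Q
    det = a * e - b * c                     # assumed nonzero
    adj = ((e, -b), (-c, a))                # Q @ adj == det * I
    # d = floor(n @ Q^{-1}), done with Python's floor division
    d = tuple((n[0] * adj[0][i] + n[1] * adj[1][i]) // det
              for i in range(2))
    r = tuple(n[i] - (d[0] * Q[0][i] + d[1] * Q[1][i]) for i in range(2))
    return d, r
```

For the $Q$ above this reproduces the worked example, $(3,5) = (2,1)Q + (0,1)$, and the $|\det Q| = 5$ distinct remainders are exactly the five points of $P$.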

Integral $I=\int_0^\infty \frac{\ln(1+x) \operatorname{Li}_2 (-x)}{x^{3/2}} dx$

Posted: 02 Oct 2021 08:04 PM PDT

Hello, can you please help me solve this integral? $$ \int_0^\infty \frac{\ln(1+x) \operatorname{Li}_2 (-x)}{x^{3/2}} dx=-\frac{2\pi}{3}(\pi^2+24\ln 2). $$ I am trying to work through all logarithmic integrals. Note that the dilogarithm $\operatorname{Li}_2(-x)$ is defined by $$ \operatorname{Li}_2(-x)=\sum_{k=1}^\infty \frac{(-x)^k}{k^2}, \ |x|<1 $$ and can be extended by analytic continuation to $|x|>1$. We also know that $$ \frac{d}{dx} \operatorname{Li}_2(-x)=-\frac{\ln(1+x)}{x}. $$ Thanks!
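The stated closed form can at least be validated numerically before attempting a derivation; a sketch using SciPy, where the only subtlety is SciPy's dilogarithm convention ($\operatorname{spence}(z) = \operatorname{Li}_2(1-z)$, hence $\operatorname{Li}_2(-x) = \operatorname{spence}(1+x)$):

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import spence

# scipy convention: spence(z) = Li2(1 - z), hence Li2(-x) = spence(1 + x)
f = lambda x: np.log1p(x) * spence(1.0 + x) / x**1.5

val, err = quad(f, 0, np.inf, limit=300)
closed = -2 * np.pi / 3 * (np.pi**2 + 24 * np.log(2))   # about -55.51
```

The quadrature agrees with the claimed value, so it is worth pursuing, e.g. by integrating by parts against $d(-2x^{-1/2})$ and using the derivative identity above.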
