Thursday, July 21, 2022

Recent Questions - Mathematics Stack Exchange



solve equation with power and square root at the same time

Posted: 19 Jul 2022 11:54 PM PDT

I find it quite difficult to solve this simple equation. You can find examples for each sub-problem separately, but I did not find a solution that handles both at once (a convex and a concave function):

$$(1+0.05)^t - 1 = \sqrt{t} \cdot 0.1$$

With Excel I can verify that $t = 3.52785480404031$, but I am not able to solve this step by step, i.e. to rearrange the formula into the form $t = \dots$

f(1) = f(2)
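Since the equation mixes an exponential with a square root, an elementary rearrangement into $t=\dots$ is not to be expected; numerically, though, the Excel value is easy to reproduce. A minimal bisection sketch (the bracket $[1, 10]$ is an assumption chosen by inspecting signs, not from the original):

```python
import math

def f(t):
    # difference between the two sides: (1 + 0.05)^t - 1 - 0.1 * sqrt(t)
    return 1.05**t - 1 - 0.1 * math.sqrt(t)

# f(1) < 0 and f(10) > 0, so the nonzero crossing lies in [1, 10]
lo, hi = 1.0, 10.0
for _ in range(100):
    mid = (lo + hi) / 2
    if f(mid) < 0:
        lo = mid
    else:
        hi = mid
print(lo)  # ≈ 3.52785480404031
```

Any bracketing root-finder (e.g. `scipy.optimize.brentq`) would do the same job faster; bisection is used here only because it needs no extra libraries.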

On the diophantine equation: $ \frac{x^2}{a^2}+\frac{y^2}{b^2} = k $. Is there already a known solution?

Posted: 19 Jul 2022 11:51 PM PDT

I managed to find a method to compute integer solutions of the equation: $$ \frac{x^2}{a^2}+\frac{y^2}{b^2} = k $$ where $$(a,b)\in\mathbf{Z}\times\mathbf{Z}$$ and $$k\in\mathbf{N}$$ I would like to know if there is already a known solution to this problem, or if I have just found a new method to compute those solutions.

Understanding why $\mathbb{E}(f\mid A)=Pf=\int_Xfd\mu$ in a probability space $(X,F,\mu)$ given a sub-sigma algebra $A$ of sets of measure 0 or 1

Posted: 19 Jul 2022 11:47 PM PDT

Let $(X, F, \mu)$ be a probability space and suppose that $A$ is a sub-sigma-algebra of $F$ such that $\forall S \in A: \mu(S)\in \{0, 1\}$. Let $f \in L^2(X,F,\mu)$ and consider the conditional expected value $g \equiv \mathbb{E}(f\mid A)$. In general (for any sub-sigma-algebra $A$ of $F$) we know that the conditional expected value satisfies $\mathbb{E}(1_Sg) = \mathbb{E}(1_Sf) = \int_X1_Sfd\mu = \int_Sfd\mu$ for every $S \in A$. I am trying to understand why, if $A$ consists of sets with measure zero or one, we get $g = \int_Xfd\mu$.

The context for this question is Lemma 3 of a blog post discussing Birkhoff's and von Neumann's ergodicity theorems. In particular I am interested in understanding von Neumann's theorem, and while I am aware that the operator version of the theorem gives the $L^2$ version (as conditional expectation is an orthogonal projection), the details of why the trivial-ish sub-sigma-algebra $A$ results in $g \equiv \int_Xfd\mu$ are not entirely clear to me. Intuitively it does make some sense: if we restrict ourselves to only sets with measure 0 or 1, then we have essentially as much information as we can have with respect to the probability measure $\mu$. But this intuition does not constitute a proof.

turbulent kinetic energy in vector notation

Posted: 19 Jul 2022 11:32 PM PDT

I'm looking to work though an example of the energy equation in cylindrical coordinates.

$$ \frac{D K}{D t}=\frac{\partial K}{\partial t}+U_{j} \frac{\partial K}{\partial x_{j}}=\frac{\partial}{\partial x_{j}}\left(-\overline{u_{i}^{\prime} u_{j}^{\prime}} U_{i} -\frac{1}{\rho} P U_{j}+v \frac{\partial K}{\partial x_{j}}\right)+g_{i} U_{i}-\left(-\overline{u_{i}^{\prime} u_{j}^{\prime}} \frac{\partial U_{i}}{\partial x_{j}}\right)-v \frac{\partial U_{i}}{\partial x_{j}} \frac{\partial U_{i}}{\partial x_{j}} $$

$$ \frac{D k}{D t}=\frac{\partial k}{\partial t}+U_{j} \frac{\partial k}{\partial x_{j}}=\frac{\partial}{\partial x_{j}}\left(-\frac{1}{2} \overline{u_{i}^{\prime} u_{i}^{\prime} u_{j}^{\prime}}-\frac{1}{\rho} \overline{p^{\prime} u_{j}^{\prime}}\right)+\frac{\partial}{\partial x_{j}}\left(v \frac{\partial k}{\partial x_{j}}\right)+\left(-\overline{u_{i}^{\prime} u_{j}^{\prime}} \frac{\partial U_{i}}{\partial x_{j}}\right)-v \frac{\partial u_{i}^{\prime}}{\partial x_{j}} \frac{\partial u_{i}^{\prime}}{\partial x_{j}} $$

I think it would be easier to transform this into vector notation, but I'm struggling. How can I write the above in vector notation?

Thanks in advance.

Show that $\sum_{k=1}^{\infty} \int_{0}^{\pi} \frac{\cos(kt)}{k}f(s)\cos(ks)ds$ is a compact operator

Posted: 19 Jul 2022 11:29 PM PDT

Consider $X=L[0,\pi]$ and define $$T(f)=\sum_{k=1}^{\infty} \int_{0}^{\pi} \frac{\cos(kt)}{k}f(s)\cos(ks)ds.$$ Show that $T$ is a compact operator and find the eigenvalues and eigenvectors.

First I define $T_n(f)=\sum_{k=1}^{n} \int_{0}^{\pi} \frac{\cos(kt)}{k}f(s)\cos(ks)ds$, but does $T_n \rightarrow T$ in the operator norm? I am not sure about this. Is this the right approach? I would be very grateful for any hint or suggestion.

Using spherical coordinates to express a tough region

Posted: 19 Jul 2022 11:47 PM PDT

Using spherical coordinates, express the triple integral of $f$ over the region $$x^2+y^2+z^2\le 2ay\le 2a^2$$ for $a>0$.


I tried $$\int_{0}^{\pi}d{\theta}\int_{0}^{\pi}\sin \phi d{\phi} \int_{0}^{2a \sin \theta \sin \phi} f \rho^2d{\rho}$$ but this gives the full sphere instead of the region described in the problem.
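Whatever spherical limits one settles on, they can be sanity-checked numerically: with $a=1$ the region $x^2+y^2+z^2\le 2y\le 2$ is the unit ball centered at $(0,1,0)$ cut by the half-space $y\le 1$, i.e. half that ball, of volume $2\pi/3$. A Monte Carlo sketch of that check (the sample size and bounding box are choices made here):

```python
import random

# Estimate the volume of { x^2 + y^2 + z^2 <= 2y <= 2 } for a = 1 by
# sampling the bounding box [-1,1] x [0,2] x [-1,1] (volume 8).
rng = random.Random(0)
N = 200_000
hits = 0
for _ in range(N):
    x, y, z = rng.uniform(-1, 1), rng.uniform(0, 2), rng.uniform(-1, 1)
    if x*x + y*y + z*z <= 2*y <= 2:
        hits += 1
volume = 8 * hits / N
print(volume)  # should be close to 2*pi/3 ≈ 2.094
```

Integrating any candidate set of limits with $f\equiv 1$ should reproduce this number; the attempt above, which sweeps out the full sphere, gives $4\pi/3$ instead.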

What is $f|_\mathbb{Q}$?

Posted: 19 Jul 2022 11:33 PM PDT

I have some confusions.

I was reading the answer here.

Suppose $f:\mathbb R\to\mathbb R$ is a continuous function. Let $x\in\mathbb R$. Then there is a sequence of rational numbers $(q_n)_{n=1}^\infty$ that converges to $x$. Continuity of $f$ means that $$\lim_{n\to\infty}f(q_n) = f(\lim_{n\to\infty}q_n)=f(x).$$ This means that the values of $f$ at rational numbers already determine $f$.

  1. What do we mean by "the values of $f$ at rational numbers already determine $f$"? What is the necessity of this line?
  2. What is $f|_\mathbb{Q}$?

Please help me. Thanks.

Applications of Convolution Integration

Posted: 19 Jul 2022 11:15 PM PDT

I was reading these math notes on Continuous Time Markov Chains and came across the following statements:

[image: equations from the notes relating $p_{ij}(t_0,t)$ to a convolution integral]

I have been trying to understand how the time-dependent probability transition matrix can be related to the convolution integral. I have been trying to learn more about convolution integrals (e.g. https://tutorial.math.lamar.edu/classes/de/convolutionintegrals.aspx). Although I see a convolution integral occurring in the above equations, I am still not sure why $p_{ij}(t_0,t)$ can be written in that way.

Can someone please help me understand the derivations of these two equations?

Thanks!

When can limit and sum be exchanged? [duplicate]

Posted: 19 Jul 2022 11:23 PM PDT

If one of $\displaystyle\lim_{x\to\infty}\displaystyle\sum_{i=1}^{n} f(i,x)$ and $\displaystyle\sum_{i=1}^{n}\displaystyle\lim_{x\to\infty} f(i,x)$ exists, does the other always exist and have the same value?

Right Triangles Line divides base into two parts.

Posted: 19 Jul 2022 11:07 PM PDT

I am trying to solve a problem related to a right triangle. If we draw a line from the vertex to the base, it divides the base $B$ into two parts, $B-X$ and $X$. I want to find $X$. Are there any suggestions on how to solve it?

How to find angle in a triangle when we have point in triangle and some angles?

Posted: 19 Jul 2022 11:16 PM PDT

[image: the problem figure]

How can this be solved in a simple way? The problem is from here: Tasks

Derive the derivative of cost function of logistic regression.

Posted: 19 Jul 2022 11:48 PM PDT

I am trying to derive the derivative of the loss function of a logistic regression model. Instead of 0 and 1, $y$ can only take the values 1 or -1, so the loss function is slightly different.

The following is how I did it. The answers I found online were all a little bit different from mine. I'd be grateful if anyone could help and see if I did something wrong!

\begin{align*} L &= -\sum_{n=1}^{N} \log \sigma\left( t_n (w^\top x_n + w_0) \right)\\ \frac{dL}{dw} &=-\frac{d}{dw}\sum_{n=1}^{N} \log \sigma\left( t_n (w^\top x_n + w_0) \right)\\ &=-\sum_{n=1}^{N} \frac{d}{dw}\log \sigma\left( t_n (w^\top x_n + w_0) \right)\\ &=-\frac{d}{dw}\log \sigma\left( w^{'\top} X^{'\top} T \right) \end{align*} where $w^{'} = \begin{bmatrix} w \\ w_0 \end{bmatrix}$, and $X^{'}= \begin{bmatrix} -x_1^\top-, 1\\ ...\\ -x_n^\top-, 1\\ \end{bmatrix}$

Now let $A(x)=log(x)$, $B(x)=\sigma(x)$, $C(x)= w^{'\top} X^{'\top}T$ Then, \begin{align*} \frac{dL}{dw^{'}}&=\frac{dA(B)}{dB} \times \frac{dB(C)}{dC} \times \frac{dC}{dw^{'}}\\ &=\frac{1}{B} \times \sigma(C)(1-\sigma(C)) \times \frac{dC}{dw^{'}}\\ &=(1-\sigma(C)) \times X^{'\top}T\\ &=(1-\sigma(w^{'\top} X^{'\top}T)) \times X^{'\top}T \end{align*}
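One way to test a derivation like this is a finite-difference check of the per-sample form $\frac{dL}{dw} = -\sum_n (1-\sigma(t_n z_n))\, t_n x_n$ with $z_n = w^\top x_n + w_0$ (a sketch; the function names, the bias folded in as a column of ones, and the toy data are all choices made here, not from the original):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def loss(w, X, t):
    # L = -sum_n log sigma(t_n * w.x_n); bias folded into w via a ones column
    return -np.sum(np.log(sigmoid(t * (X @ w))))

def grad(w, X, t):
    # dL/dw = -sum_n (1 - sigma(t_n * w.x_n)) * t_n * x_n
    s = sigmoid(t * (X @ w))
    return -X.T @ ((1.0 - s) * t)

rng = np.random.default_rng(0)
X = np.hstack([rng.normal(size=(5, 2)), np.ones((5, 1))])  # last column = bias
t = np.array([1.0, -1.0, 1.0, -1.0, 1.0])                  # labels in {-1, +1}
w = rng.normal(size=3)

# central finite differences along each coordinate direction
eps = 1e-6
num = np.array([(loss(w + eps * e, X, t) - loss(w - eps * e, X, t)) / (2 * eps)
                for e in np.eye(3)])
print(np.allclose(num, grad(w, X, t), atol=1e-5))  # True
```

If an online answer's formula disagrees with yours, running both through this kind of check usually reveals which one matches the loss.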

Show that $\frac{\partial^2 f}{\partial x^2} + \frac{\partial^2 f}{\partial y^2}$ equals $\frac{\partial^2 f}{\partial u^2} + \frac{\partial^2 f}{\partial v^2}$

Posted: 19 Jul 2022 11:35 PM PDT

Given:

$$\frac{\partial^2 f}{\partial x^2} + \frac{\partial^2 f}{\partial y^2} = \frac{\partial^2 f}{\partial u^2} + \frac{\partial^2 f}{\partial v^2}$$

where $$u = x \cos \theta + y \sin \theta$$ $$v = -x \sin \theta + y \cos \theta$$

I am trying to show how we can get:

$$ \frac{\partial f}{\partial y} = \frac{\partial f}{\partial u} \frac{\partial u}{\partial y} + \frac{\partial f}{\partial v} \frac{\partial v}{\partial y} = \frac{\partial f}{\partial u} \sin \theta+ \frac{\partial f}{\partial v} \cos \theta$$

Now, taking the derivative of $u = x \cos \theta + y \sin \theta$ with respect to $y$ gives $\frac{\partial u}{\partial y} = \sin\theta$, and likewise $v = -x \sin \theta + y \cos \theta$ gives $\frac{\partial v}{\partial y} = \cos\theta$.

Question: how can we get $ \frac{\partial f}{\partial y} = \frac{\partial f}{\partial u} \frac{\partial u}{\partial y} + \frac{\partial f}{\partial v} \frac{\partial v}{\partial y} $ in the first place? If we summed $ \frac{\partial f}{\partial u} \frac{\partial u}{\partial y} + \frac{\partial f}{\partial v} \frac{\partial v}{\partial y} $, wouldn't we get $ 2 \frac{\partial f}{\partial y} $ and not $ \frac{\partial f}{\partial y} $?

Find the eigenvalues of the operator $T(x)=\sum_{i=1}^{\infty} \langle x,e_i \rangle e_{i+1}$

Posted: 19 Jul 2022 10:41 PM PDT

Let $X$ be a separable Hilbert space and let $\{e_1,e_2, \ldots \}$ be an orthonormal basis of $X$. Define $$T(x)=\sum_{i=1}^{\infty} \langle x,e_i \rangle e_{i+1}$$ Is $T$ compact or symmetric? Compute the eigenvalues and the norm of $T$.

I think $\|T\|=1$; so far I have proved that $\|T\| \leq 1$. For the eigenvalues I tried to use the spectral decomposition theorem, but first I wanted to check whether $T$ is symmetric. Any suggestion or help with the other parts of the question would be very welcome. Edit: I have proved that $T$ is not a compact operator.

Evaluating $\lim _{x \rightarrow 0} \frac{12-6 x^{2}-12 \cos x}{x^{4}}$

Posted: 19 Jul 2022 11:10 PM PDT

$$ \begin{align*} &\text{Let } x=2y; \text{ since } x \rightarrow 0,\ y \rightarrow 0.\\ &\phantom{=}\lim _{x \rightarrow 0} \frac{12-6 x^{2}-12 \cos x}{x^{4}}\\ &=\lim _{y \rightarrow 0} \frac{12-6(2 y)^{2}-12 \cos 2 y}{(2 y)^{4}}\\ &=\lim _{y \rightarrow 0} \frac{12-24 y^{2}-12 \cos 2 y}{16 y^{4}}\\ &=\lim _{y \rightarrow 0} \frac{3(1-\cos 2 y)-6 y^{2}}{4 y^{4}}\\ &=\lim _{y \rightarrow 0} \frac{3 \cdot 2 \sin ^{2} y-6 y^{2}}{4 y^{4}}\\ &=\lim _{y \rightarrow 0} \frac{3\left(y-\frac{y^{3}}{3 !}+\frac{y^{5}}{5 !}-\cdots\right)^{2}-3 y^{2}}{2 y^{4}}\\ &=\lim _{y \rightarrow 0} \frac{3\left(y^{2}-\frac{2 y^{4}}{3 !}+\left(\frac{1}{(3 !)^{2}}+\frac{2}{5 !}\right) y^{6}-\cdots\right)-3 y^{2}}{2 y^{4}}\\ &=\lim _{y \rightarrow 0} \frac{-\frac{6}{3 !}+3\left(\frac{1}{(3 !)^{2}}+\frac{2}{5 !}\right) y^{2}-\cdots}{2}\\ &=-\frac{1}{2} \text { (Ans.) } \end{align*} $$

Doubt

Can anyone please explain equation lines 5, 6, and 7 (the $1-\cos 2y = 2\sin^2 y$ step and the series expansion of $\sin y$)? Thank you.

The difference between two independent scaled Poisson random variables

Posted: 19 Jul 2022 11:39 PM PDT

Let $X_1$ and $X_2$ be independent Poisson variables with respective parameters $\mu_1$ and $\mu_2$.

What is the distribution function of $A_1X_1 - A_2X_2$ ($A_1$ and $A_2$ are constants) ?

If $A_1 = A_2 = 1$, the result is the Skellam distribution. Here I try to derive it in the following way: \begin{equation} P(k; \mu_1, \mu_2, A_1, A_2) = \exp[-(\mu_1+\mu_2)] \sum^\infty_{n = \max(0, -k)}\cfrac{\mu_1^{(n+k)/A_1}\mu_2^{n/A_2}}{\Gamma((n+k)/A_1+1)\Gamma(n/A_2+1)}, \end{equation} the convolution of two Poisson distributions. But I have no idea how to simplify this formula.
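For the $A_1=A_2=1$ case, the convolution sum can at least be sanity-checked numerically before attempting to simplify it (a sketch; `skellam_pmf` and the truncation point `nmax` are choices made here):

```python
import math

def skellam_pmf(k, mu1, mu2, nmax=100):
    # P(X1 - X2 = k) for independent Poisson(mu1), Poisson(mu2),
    # computed by truncating the convolution sum over n
    total = 0.0
    for n in range(max(0, -k), nmax):
        total += (mu1 ** (n + k) / math.factorial(n + k)) * \
                 (mu2 ** n / math.factorial(n))
    return math.exp(-(mu1 + mu2)) * total

# the pmf should sum to 1 and have mean mu1 - mu2
probs = {k: skellam_pmf(k, 2.0, 3.0) for k in range(-40, 41)}
print(sum(probs.values()))                   # ≈ 1.0
print(sum(k * p for k, p in probs.items()))  # ≈ -1.0 = mu1 - mu2
```

For general $A_1 \ne A_2$ the support is no longer the integers but the lattice $\{A_1 m - A_2 n\}$, which is one reason a closed form analogous to Skellam's is hard to expect.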

How can I find the number of possible combinations of steps for two flights of stairs?

Posted: 19 Jul 2022 11:57 PM PDT

To find the number of possible combinations of steps for one flight, we use the Fibonacci sequence to calculate the answer.

For instance, if there are eleven steps on a single flight of stairs: $0, 1, 2, 3, 5, 8, 13, 21, 34, 55, 89, 144, 233$ $$f(11)=144$$

What should I do in order to find the number of possible combinations of steps for a staircase of two flights, given that there are eleven steps on each flight?
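Assuming steps are taken one or two at a time (the usual setup behind the Fibonacci count) and that the landing forces each flight to be completed independently, the counts for the two flights simply multiply. A short dynamic-programming sketch of both claims (the independence of the flights is an assumption here, not from the original):

```python
def climb_ways(n):
    # ways to climb n steps taking 1 or 2 at a time (Fibonacci recurrence)
    a, b = 1, 1  # ways for 0 steps and for 1 step
    for _ in range(n - 1):
        a, b = b, a + b
    return b

print(climb_ways(11))       # 144, matching f(11) above
print(climb_ways(11) ** 2)  # two independent flights of 11 steps each
```

If instead the two flights were one uninterrupted run of 22 steps, the answer would be `climb_ways(22)`, which is larger; the landing is what breaks the staircase into independent pieces.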

How do we go from $\lim\limits_{x\to \infty} x\sin{\frac{1}{x}}$ to $\lim\limits_{x\to 0^+} \frac{1}{x}\sin{x}$?

Posted: 19 Jul 2022 11:07 PM PDT

From Spivak's Calculus, Ch. 15 "Trigonometric Functions"

  1. Find $\lim\limits_{x\to \infty} x\sin{\frac{1}{x}}$.

The solution manuals says simply

$\lim\limits_{x\to \infty} x\sin{\frac{1}{x}}=\lim\limits_{x\to 0^+} \frac{1}{x}\sin{x}=1$

What is the exact manipulation at the $\epsilon$ $\delta$ level that allows us to change the form of the limit?

$\lim\limits_{x\to \infty} x\sin{\frac{1}{x}}=l$ means

$$\forall \epsilon>0\ \exists M>0\ \forall x, x>M\implies |x\sin{1/x}-l|<\epsilon$$

Let $y=\frac{1}{x}>0$.

Then for some $\epsilon>0$ there is an $M>0$ such that $\forall y$ such that $y=\frac{1}{x}$ we have

$$\left (y=\frac{1}{x}>M>0\implies 0<x<\frac{1}{M} \right )\implies \left |y\sin{\frac{1}{y}}-l \right |<\epsilon\implies \left |\frac{1}{x}\sin{x}-l \right |<\epsilon$$

Therefore, we are saying that for any $x$ we can form a $y$ larger than $M$, and this happens when $x$ is smaller than $1/M$, and this leads to $\left |\frac{1}{x}\sin{x}-l \right |<\epsilon$.

Therefore,

$$\forall \epsilon>0\ \exists \delta>0\ \forall x,\ 0<x<\delta \implies \left |\frac{1}{x}\sin{x}-l \right |<\epsilon$$

This sort of thing really makes my head spin. Is it really this difficult to grasp such manipulations? Are the manipulations of the initial limit above correct?

At this point we have

$$\lim\limits_{x\to 0^+} \frac{1}{x}\sin{x}=l$$

And this limit we can solve by L'Hôpital's Rule

$$\lim\limits_{x\to 0^+} \frac{1}{x}\sin{x}=\lim\limits_{x\to 0^+} \frac{\cos{x}}{1}=1$$
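At the level of values, the substitution $y=1/x$ only relabels the same quantity, which a quick numeric check illustrates (a sketch to build intuition, not a proof):

```python
import math

# x*sin(1/x) at large x equals sin(y)/y at y = 1/x; both tend to 1
for x in [10.0, 1e3, 1e6]:
    y = 1 / x
    print(x * math.sin(y), math.sin(y) / y)
```

The two printed columns are identical by construction; the $\epsilon$-$\delta$ work above is only about translating "$x$ large" into "$y$ small".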

Convergence of $\sum_{k=0}^\infty \frac{2^k k!}{(2k)!}$

Posted: 19 Jul 2022 11:17 PM PDT

We want to check if the following converges (absolutely):

$$\sum_{k=0}^\infty \frac{2^k k!}{(2k)!}$$

I have seen the following solution on the internet, but it's too complex for me to understand

[image: the solution found online]

Is there an "easier" way to do this?
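One arguably easier route is the ratio test: $\frac{a_{k+1}}{a_k}=\frac{2^{k+1}(k+1)!\,(2k)!}{2^k\,k!\,(2k+2)!}=\frac{2(k+1)}{(2k+1)(2k+2)}=\frac{1}{2k+1}\to 0<1$, so the series converges; since every term is positive, it converges absolutely. A numeric sketch of both the ratios and the partial sums:

```python
from math import factorial

def term(k):
    return 2**k * factorial(k) / factorial(2 * k)

# ratio test: a_{k+1}/a_k = 1/(2k+1) -> 0, so the series converges
ratios = [term(k + 1) / term(k) for k in range(1, 6)]
partial = sum(term(k) for k in range(0, 40))
print(ratios)   # 1/3, 1/5, 1/7, ...
print(partial)  # partial sum, stabilizes quickly
```

Comparison with $\sum 1/2^k$ (valid from $k=2$ on, since $1/(2k+1)\le 1/2$) works just as well if the ratio test has not been covered.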

Wasserstein-1 distance

Posted: 19 Jul 2022 11:00 PM PDT

The $W_1$ distance between two measures $\mu$ and $\nu$ is defined by

$$ W_1(\mu,\nu)=\sup_{\|f\|_{Lip}\leq 1}\int f(x)d(\mu-\nu)(x) $$

where $\|f\|_{Lip}$ is the Lipschitz constant of $f$.

Can we relax this definition, for example to require only that $f$ be continuous with compact support?

Series of almost-alternating reciprocals of squares is non-zero

Posted: 19 Jul 2022 11:28 PM PDT

Let $0<a<b<2$ be fixed, and consider the series \begin{equation} S:=\sum_{k=1}^\infty \frac{\big(\cos(k\pi a)-\cos(k\pi b)\big)}{k^2}. \end{equation} This series can readily be seen to be convergent. But for which values of $a$ and $b$, can we ensure that $S\neq 0$?
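Using the standard Fourier expansion $\sum_{k\ge 1}\frac{\cos(k\theta)}{k^2}=\frac{\pi^2}{6}-\frac{\pi\theta}{2}+\frac{\theta^2}{4}$ for $\theta\in[0,2\pi]$, with $\theta=\pi a$ and $\theta=\pi b$, the series should collapse to $S=\frac{\pi^2}{4}(b-a)\big(2-(a+b)\big)$, suggesting $S=0$ exactly when $a+b=2$. A numeric spot-check of that derivation (a sketch under the stated expansion, with arbitrarily chosen sample pairs):

```python
import math

def S_partial(a, b, N=200_000):
    # truncated version of the series S
    return sum((math.cos(k * math.pi * a) - math.cos(k * math.pi * b)) / k**2
               for k in range(1, N))

def S_closed(a, b):
    # closed form implied by the cos(k*theta)/k^2 Fourier series
    return math.pi**2 / 4 * (b - a) * (2 - (a + b))

for a, b in [(0.5, 1.0), (0.2, 0.9), (0.3, 1.7)]:
    print(S_partial(a, b), S_closed(a, b))
```

The pair $(0.3, 1.7)$ has $a+b=2$, and indeed the terms cancel identically there since $\cos(1.7\pi k)=\cos(0.3\pi k)$.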

Alternate ways to teach this combinatorics question?

Posted: 19 Jul 2022 11:54 PM PDT

Suppose we have a set A with 18 members. Further, A is divided into 3 subsets B, C, and D. Set B has 5 members, C has 6, and D has 7. If we randomly select 3 members of set A, what is the probability we'll select one member from each subset?

My question is this. What is the best way to solve this problem for high school students?

My method is this. First, find the probability that the first selection is from B, the second from C, and the third from D. This yields $\frac{35}{816}$. Since the order doesn't matter, and there are 3! = 6 possible orders, multiply by 6. Thus the probability is $\frac{35}{136}$.

This method works and is pretty easy. My students sometimes struggle to understand why I multiply by 6. I'd appreciate seeing any other solutions that might be a bit easier on the understanding, even if they require a bit more work.
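One alternative that sidesteps the "multiply by 6" step is to count unordered selections directly with binomial coefficients: the favorable outcomes are $\binom{5}{1}\binom{6}{1}\binom{7}{1}=210$ of the $\binom{18}{3}=816$ equally likely 3-member subsets, giving $\frac{210}{816}=\frac{35}{136}$. A quick check of that arithmetic:

```python
from math import comb
from fractions import Fraction

# favorable: one ball from each of B (5), C (6), D (7); total: any 3 of 18
p = Fraction(comb(5, 1) * comb(6, 1) * comb(7, 1), comb(18, 3))
print(p)  # 35/136
```

Because both numerator and denominator count unordered selections, order never enters, which may be gentler on students than the "undo the ordering" argument.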

what does instantaneous rate of change at a point tell us about the function at that point

Posted: 19 Jul 2022 11:33 PM PDT

The instantaneous rate of change is defined as the slope of the tangent line at a point, but it is also said to be the rate of change of the function at that instant.

Firstly, I want to know how a point can have a rate of change, and what information the instantaneous rate of change tells us about the function at that point. I understand that since the instantaneous rate of change is defined as the slope of the tangent, it tells us about the gradient of the function at the point, i.e. in which direction the function's gradient is positive, negative, etc. from that point. But how does it tell the rate of the gradient? A nonlinear function has a different derivative value, i.e. a different tangent, at every single point; for example, $x^2$ has a different tangent at every point. So how can the slope of a tangent tell us the rate of change of the function at that point? The rate of change should tell us how much the output changes for a certain change in the input from this point, but for a function like $x^2$, with a different derivative at every point, we can't really use the slope of the tangent at that point to tell us what the next point will be, can we?

EDIT: What I want to know is whether the derivative is just an approximation. By that I don't mean that it is an approximate value of the slope of the tangent: the derivative gives the exact slope of the tangent at the point. I mean that this slope is not exactly equal to the change that happens in the function's value with respect to a change in the input. For example, for $x^2$ the derivative at a point $x$ is $2x$, so the change in the function predicted by this slope should be $2x$ per unit change in input, but this isn't exact: the actual change in the function's value is only approximately equal to the change predicted by the slope $2x$.

Two hard integrals: $\int_{0}^{1}\frac{\log{(x)}\log{(1-x)}\log{(1+x^2)}}{x}dx$ and $\int_{0}^{1}\frac{\log^2{(x)}\log{(1+x^2)}}{1-x}dx$

Posted: 19 Jul 2022 11:39 PM PDT

I found two integrals that seem hard to evaluate: $$I_1=\int_{0}^{1}\frac{\log{(x)}\log{(1-x)}\log{(1+x^2)}}{x}dx$$ $$I_2=\int_{0}^{1}\frac{\log^2{(x)}\log{(1+x^2)}}{1-x}dx$$ I am just a beginner with logarithmic integrals, so I searched for substitutions like $x\mapsto\frac{1}{1+x}$, $x\mapsto\frac{1}{1-x}$, or $x\mapsto\frac{1-x}{1+x}$, but they didn't work. May I ask everyone for some ideas? Thank you.

EDIT: Mathematica with the MZIntegrate paclet gives the closed forms:

$$\begin{align}I_1&=\int_{0}^{1}\frac{\log{(x)}\log{(1-x)}\log{(1+x^2)}}{x}dx\\&=G^2+\frac{5 \text{Li}_4\left(\frac{1}{2}\right)}{4}+\frac{35}{32} \zeta (3) \log (2)-\frac{119 \pi ^4}{5760}+\frac{5 \log ^4(2)}{96}-\frac{5}{96} \pi ^2 \log ^2(2)\\I_2&=\int_{0}^{1}\frac{\log^2{(x)}\log{(1+x^2)}}{1-x}dx\\&=2 G^2+\frac{35}{16} \zeta (3) \log (2)-\frac{199 \pi ^4}{5760}\end{align}$$ where $G$ is Catalan's constant.

A balls-and-colors problem, supposedly elegant solution that needs to be explained further

Posted: 19 Jul 2022 10:55 PM PDT

Here's a question asked on MO: https://mathoverflow.net/questions/41939/a-balls-and-colours-problem

A box contains n balls coloured 1 to n. Each time, you pick two balls from the bin, a first ball and a second ball, both uniformly at random, and you paint the second ball with the colour of the first. Then you put both balls back into the box. What is the expected number of times this needs to be done so that all balls in the box have the same colour?

Here is a supposedly elegant answer given by Ori Gurel-Gurevich: https://mathoverflow.net/a/41985/169482

It can probably be done by looking at the sum of squares of sizes of color clusters and then constructing an appropriate martingale. But here's a somewhat elegant solution: reverse the time!

Let's formulate the question like that. Let $F$ be the set of functions from $\{1,\ldots,n\}$ to $\{1,\ldots,n\}$ that are almost identity, i.e., $f(i)=i$ except for a single value $j$. Then if $f_t$ is a sequence of i.i.d. uniformly from $F$, and $$g_t=f_1 \circ f_2 \circ \ldots \circ f_t$$ then you can define $\tau= \min \{ t | g_t \verb"is constant"\}$. The question is then to calculate $\mathbb{E}(\tau)$.

Now, one can also define the sequence $$h_t=f_t \circ f_{t-1} \circ \ldots \circ f_1$$ That is, the difference is that while $g_{t+1}=g_t \circ f_{t+1}$, here we have $h_{t+1}=f_{t+1} \circ h_t$. This is the time reversal of the original process.

Obviously, $h_t$ and $g_t$ have the same distribution so $$\mathbb{P}(h_t \verb"is constant")=\mathbb{P}(g_t \verb"is constant")$$ and so if we define $\sigma=\min \{ t | h_t \verb"is constant"\}$ then $\sigma$ and $\tau$ have the same distribution and in particular the same expectation.

Now calculating the expectation of $\sigma$ is straightforward: if the range of $h_t$ has $k$ distinct values, then with probability $k(k-1)/n(n-1)$ this number decreases by 1 and otherwise it stays the same. Hence $\sigma$ is the sum of geometric random variables with parameters $k(k-1)/n(n-1)$ and its expectation is $$\mathbb{E}(\sigma)=\sum_{k=2}^n \frac{n(n-1)}{k(k-1)}= n(n-1)\sum_{k=2}^n \left(\frac1{k-1} - \frac1{k}\right) = n(n-1)\left(1-\frac1{n}\right) = (n-1)^2 .$$

However, I don't understand this very terse answer at all, and so was wondering if anyone could explain it to us who are less familiar with the tools.

  1. Why is reversing the time key here? Why can't we just straightforwardly apply what's done in the last paragraph of the solution ("reverse time") to the regular "forward time" stuff?
  2. I don't understand the last paragraph at all, can someone explain it with more detail?

Peter Shor offered the following comment on MO as clarification:

Aha! I now understand Ori's answer. At time $t$, considering all steps from step $t$ to the end, there will be $k$ balls whose colors are mapped to all the other balls at the end. Considering time step $t-1$, the only way to reduce $k$ is to choose two of these $k$ influential balls, and have the color of one mapped to that of another. This gives the recursion in his answer. Very nice, although it could be explained better.

But I don't understand Peter's clarification either!

So I'm wondering if anyone could explain Ori's answer in a more user-friendly way, so low-level students like myself can understand (and not just the research mathematicians of MO).
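As a sanity check on the final $(n-1)^2$ formula, the forward process is easy to simulate directly (a sketch; the function name, the seed, and the trial count are choices made here):

```python
import random

def time_to_monochrome(n, rng):
    # forward process: pick distinct (first, second) balls uniformly at
    # random, repaint the second with the first's colour, repeat
    colors = list(range(n))
    t = 0
    while len(set(colors)) > 1:
        i, j = rng.sample(range(n), 2)
        colors[j] = colors[i]
        t += 1
    return t

rng = random.Random(1)
n, trials = 4, 20_000
mean = sum(time_to_monochrome(n, rng) for _ in range(trials)) / trials
print(mean)  # should be close to (n - 1)**2 = 9
```

For $n=4$ the reversed-process argument predicts $1 + 2 + 6 = 9$ (geometrics with success probabilities $1$, $1/2$, $1/6$), and the simulated mean lands there, even though the simulation runs forward in time.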

Fourier transform is continuous using $\epsilon$-$\delta$ definition of continuity

Posted: 19 Jul 2022 11:48 PM PDT

Let $f\in L^1(\mathbb{R}^n)$. Then the Fourier transform $\hat{f}$ is given by $$\hat{f}(\xi):=\int_{\mathbb{R}^n} e^{-2\pi i x\cdot \xi} f(x) dx.$$ The dominated convergence theorem confirms that the Fourier transform is a continuous function. Indeed, if the sequence $\xi_n\rightarrow \xi_0$ then $\hat{f}(\xi_n)\rightarrow\hat{f}(\xi_0)$ since, for every $n\geq 1$, $|e^{-2\pi i x\cdot \xi_n} f(x)|\leq |f(x)|\in L^1(\mathbb{R}^n)$.

I want to prove that $\hat{f}$ is continuous using the $\epsilon$ -$\delta$ definition.

My attempt:

Fix $\xi_0$ in $\mathbb{R}^n$ and let $\epsilon>0$. We need to prove that there exists $\delta>0$ such that $$|\xi-\xi_0|<\delta \implies |\hat{f}(\xi)-\hat{f}(\xi_0)|<\epsilon.$$

We have $$\hat{f}(\xi)-\hat{f}(\xi_0)=\int_{\mathbb{R}^n} (e^{-2\pi i x\cdot \xi}-e^{-2\pi i x\cdot \xi_0}) f(x) dx.$$ Define $$g(t):=e^{-2\pi i x\cdot ((1-t)\xi+t\xi_0)}.$$ Then $g$ is a smooth function. By the mean-value theorem $$e^{-2\pi i x\cdot \xi}-e^{-2\pi i x\cdot \xi_0}=g(1)-g(0)= g^{\prime}(s)= 2\pi i e^{-2\pi i x\cdot ((1-s)\xi+s\xi_0)}x\cdot (\xi-\xi_0)$$ for some $s\in (0,1)$.

Therefore $$\hat{f}(\xi)-\hat{f}(\xi_0)=2\pi i (\xi-\xi_0)\cdot\int_{\mathbb{R}^n} e^{-2\pi i x\cdot ((1-s)\xi+s\xi_0)}x f(x) dx.$$

Now, if $f$ has compact support, or decays fast enough that $\int_{\mathbb{R}^n} |x||f(x)|\,dx<\infty$, we are done since, in that case,

$$|\hat{f}(\xi)-\hat{f}(\xi_0)|\leq C |\xi-\xi_0|$$

with $C=2\pi \int_{\mathbb{R}^n} |x| |f(x)| dx$.

If $f$ is just an $L^1$ function we must use the cancellation due to the oscillatory exponential factor (may be using Riemann-Lebesgue lemma).
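One standard way to finish for a general $f\in L^1$, sketched here as one possible completion (splitting the integral at a radius $R$ instead of applying the mean-value bound globally): given $\epsilon>0$, choose $R$ with $\int_{|x|>R}|f(x)|\,dx<\epsilon/4$. Since $|e^{i\theta}-1|\le|\theta|$,

$$|\hat{f}(\xi)-\hat{f}(\xi_0)| \le \int_{|x|\le R}\left|e^{-2\pi i x\cdot(\xi-\xi_0)}-1\right||f(x)|\,dx + 2\int_{|x|>R}|f(x)|\,dx \le 2\pi R\,|\xi-\xi_0|\,\|f\|_{L^1} + \frac{\epsilon}{2},$$

so $\delta = \epsilon/\big(4\pi R(\|f\|_{L^1}+1)\big)$ works. In this approach no cancellation from the oscillatory factor is needed; $\hat{f}$ even comes out uniformly continuous, since $\delta$ does not depend on $\xi_0$.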

$p^k\mid |G|$ for $k=1,\dots,\alpha$, and $n_k=\#$ of subgroups of order $p^k$. Is $n_k\equiv 1\pmod p$ for every $k=1,\dots,\alpha$?

Posted: 19 Jul 2022 11:55 PM PDT

Let $p$ be a prime divisor of the order of a finite group $G$, and let $\alpha$ be the exponent of the greatest power of $p$ dividing $|G|$. Let $n_k$ be the number of subgroups of $G$ of order $p^k$ (so, $n_\alpha$ is the number of $p$-Sylow subgroups of $G$, usually referred to as "$n_p$"). The following three facts are compatible with the subsequent (conjectured) claim:

  1. $n_\alpha\equiv 1\pmod p$, by Sylow III;
  2. $n_k\ge 1$, for every $k=1,\dots,\alpha$: this is, e.g., the Theorem 2.12.1 in Herstein's Topics in algebra;
  3. $n_1\equiv 1\pmod p$: this is a corollary of this result: in fact, in the notation of the link, $p\mid |X|=n_1(p-1)+1=n_1p-(n_1-1)$ if and only if $p\mid n_1-1$ if and only if $n_1\equiv 1\pmod p$.

Claim. $n_k\equiv 1\pmod p$ for every $k=1,\dots,\alpha$.

According to this and this, the claim holds true for the particular cases $|G|=2^33$ and $|G|=2^43$, respectively.

Q: Is the claim true?

How do we tell whether ‘some’ means Exactly one or At least one? [closed]

Posted: 19 Jul 2022 10:58 PM PDT

A natural number $n$ is even if it has the form $n = 2k$ for some $k$ $\in$ $\mathbb N.$

I'm confused by the usage of 'some' in this definition. Isn't "exactly one" more suitable for this definition? I mean, when you choose one value of $n,$ just one value of $k$ matches that.

How do we know whether 'some' means just one or at least one? Depending on the context?

Anyway, why do we say "some", rather than "all" or "any", to mean at least one?

"With" Meaning in Proof, Compared to "Such That"

Posted: 19 Jul 2022 10:43 PM PDT

What does 'with' mean in: "For all real numbers $x$, there is some real number $y$ with $y = x^2$"?

My thought process so far is that it is similar to: $\forall x \in \mathbb{R}, \exists y \in \mathbb{R}$ s.t. $y = x^2$.

But I thought 'with' and 'such that' would have different meanings because the other questions on my assignment uses the word 'such that' instead of 'with'.

Characterize normal subgroups - Find all subgroups of $S_3$ conjugate to $\{id, (1,3) \}$ - Fraleigh p. 143 14.29

Posted: 19 Jul 2022 11:50 PM PDT

(27.) A subgroup H is conjugate to a subgroup K of a group G
(viz. p. 141 $K \le G$ is a conjugate subgroup of $H$), if $i_g[H] = gHg^{-1} =K$ for some $g \in G$.
Show that conjugacy is an equivalence relation on the collection of subgroups of G.

(28.) Characterize the normal subgroups of a group G in terms of the cells where they appear in the partition given by the conjugacy relation in exercise (27.)

Answer. We see that the normal subgroups of G are precisely the subgroups in the one-element cells of the conjugacy partition of the subgroups of G.

(29.) Referring to Exercise 27, find all subgroups of $S_3$ conjugate to $\{id, (1,3)(2) \}$.

(27.) The answer on p. 50 says $gHg^{-1} = K$ means for each $k \in K$, $k = ghg^{-1}$ for exactly one $h \in H$. Why is $h$ unique here? As I asked here, $gH = Hg \iff gh_1 = h_2g$ where $h_1$ can $\neq h_2$?

(28.) Is there a picture please for the answer to (28.) to help me understand?

(29.) References respectively: http://www.sfu.ca/~jtmulhol/math302/notes/302notes.pdf p. 126 and Source http://www.auburn.edu/~huanghu/math5310/alg-hw-ans-13 i think.pdf

(29.) wants us to find all $K \le S_3$ such that $g\{id, (1,3)(2) \}g^{-1} = K$ for some $g \in S_3$.
Hence why does the solution fret about only 3 elements of $S_3$ for $g$? What about the other 3?
