Friday, July 1, 2022

Recent Questions - Mathematics Stack Exchange

Please help with this trigonometric equation

Posted: 01 Jul 2022 03:21 PM PDT

How do I solve this?

What is the maximum value of $\cos^4(x) + \sin^2(x) + \cos(x)$ when $x \geq 0$?

Any tips on how to solve these types of trigonometric equations would be really helpful. I also tried to solve it with the help of graphs, but couldn't manage the fourth-degree cosine term.
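
One possible reduction (a sketch added for reference, not from the original post): substituting $t = \cos x$ turns the expression into a polynomial on $[-1, 1]$, whose maximum can be found by checking the endpoints and the critical points:

$$\cos^4 x + \sin^2 x + \cos x = t^4 + (1 - t^2) + t = t^4 - t^2 + t + 1, \qquad t = \cos x \in [-1, 1].$$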

Drawer with Silver Coin problem's solution clarification

Posted: 01 Jul 2022 03:20 PM PDT

I am trying to understand a problem's solution, but I can't resolve my confusion. This question has also been asked before in another post. Below are the problem and the solution description:

Each of 2 cabinets identical in appearance has 2 drawers. Cabinet A contains a silver coin in each drawer, and cabinet B contains a silver coin in one of its drawers and a gold coin in the other. A cabinet is randomly selected, one of its drawers is opened, and a silver coin is found. What is the probability that there is a silver coin in the other drawer?

The reference solution:

A = the cabinet A is chosen

B = the cabinet B is chosen

S = a silver coin is chosen

$P(A|S) = \frac{P(S|A)P(A)}{P(S|A)P(A)+P(S|B)P(B)}$

$P(A|S) = \frac{1\times0.5}{1\times0.5 + 0.5\times0.5}$

$P(A|S) = \frac{2}{3}$

My confusion:

As the question states that a cabinet is randomly selected, why does the verified solution calculate $P(A|S)$, the probability that cabinet A was chosen, instead of $P(B|S)$, the probability that cabinet B was chosen? I am stuck at this point. Is there any reasoning behind it?
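
As an aside, a quick Monte Carlo check (a sketch added here, not part of the original question) reproduces the $\frac{2}{3}$ figure; note that $P(A|S)$ equals the probability that the other drawer also holds silver, since only cabinet A has silver in both drawers:

    import random

    # Cabinet A: two silver coins; cabinet B: one silver, one gold.
    rng = random.Random(0)
    silver_first = silver_other = 0
    for _ in range(100_000):
        cabinet = rng.choice("AB")
        drawers = ["S", "S"] if cabinet == "A" else ["S", "G"]
        rng.shuffle(drawers)
        if drawers[0] == "S":            # condition on finding a silver coin
            silver_first += 1
            silver_other += drawers[1] == "S"
    print(silver_other / silver_first)   # ~ 0.667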

How to find equilibrium points of this differential equation

Posted: 01 Jul 2022 03:13 PM PDT

I'm trying to find the equilibrium points for the vector field of the following differential equation: $$\frac{d^2\Psi}{dt^2}= \frac{g}{\xi\ell}\sin{\xi\Psi}-\frac{V^2}{\xi \ell R}\frac{\cos{\xi \Psi}}{\cos{\Psi}}\mathrm{sgn}\Psi,$$ for $\ 0≤\xi≤1.$

I tried doing this:

At equilibrium points, conditions are: $\frac{d^2\Psi}{dt^2}=\frac{d\Psi}{dt}=0;\ \Psi=\Psi_{eq}$. Thus, the differential equation reduces to this: $$0=\sin{\xi\Psi_{eq}}-\frac{V^2}{g R}\frac{\cos{\xi \Psi_{eq}}}{\cos{\Psi_{eq}}}\mathrm{sgn}\Psi_{eq},$$ where $0≤\frac{V^2}{g R}≤2\ \mathrm{or}\ 3.$

This most likely isn't solvable analytically, so I thought about averaging or approximating. What do you suggest?
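
For what it's worth, the equilibria can at least be located numerically for given parameter values; a minimal sketch (the values of $\xi$ and $V^2/(gR)$ are arbitrary choices for illustration):

    import numpy as np
    from scipy.optimize import brentq

    xi, c = 0.5, 0.1  # c = V^2 / (g R); both values chosen arbitrarily

    def rhs(psi):
        # right-hand side of the reduced equilibrium equation
        return np.sin(xi * psi) - c * np.cos(xi * psi) / np.cos(psi) * np.sign(psi)

    # scan for sign changes, then refine each bracket with Brent's method
    grid = np.linspace(0.01, 1.5, 2000)  # stay away from cos(psi) = 0
    vals = rhs(grid)
    roots = [brentq(rhs, a, b) for a, b, fa, fb in
             zip(grid[:-1], grid[1:], vals[:-1], vals[1:]) if fa * fb < 0]
    print(roots)  # equilibria psi_eq found in (0, 1.5) for these values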

Eigenvalues of block matrix with zero diagonal

Posted: 01 Jul 2022 03:11 PM PDT

I'm looking for the eigenvalues of a block matrix of the form

$$ M=\begin{pmatrix} \boldsymbol{0} & C \\ C^\dagger & \boldsymbol{0} \end{pmatrix}= \begin{pmatrix} \boldsymbol{0} & H(I-uu^\dagger) \\ (I-uu^\dagger)H & \boldsymbol{0} \end{pmatrix} $$

where $H$ is a general positive semidefinite Hermitian matrix (dim: $2^n\times 2^n$) and $u$ is a general unit vector. If it makes things simpler, we can take the elements of $H$ and $u$ to be real. $M$ is clearly Hermitian, so the eigenvalues are real. I can show 2 of $M$'s eigenvalues are zero, corresponding to

$$v=\begin{pmatrix}1\\0\end{pmatrix}\otimes H^{-1}u \quad\text{and}\quad v=\begin{pmatrix}0\\1\end{pmatrix}\otimes u$$

I can also show they need to come in pairs $\lambda, -\lambda$ since we have

$$ \left|\begin{pmatrix} A & B \\ C & D \end{pmatrix}\right| = \det (A)\det\left(D-CA^{-1}B\right)\to \left|\begin{pmatrix} -\lambda I & C \\ C^\dagger & -\lambda I \end{pmatrix}\right| = \det (-\lambda I)\det\left(-\lambda I+CC^\dagger/\lambda\right) $$

which is clearly invariant under $\lambda\to -\lambda$.

What can be said about the remaining eigenvalues? An explicit formula would be awesome but bounds on their magnitudes or anything really would be helpful.
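
A quick numerical sanity check (a sketch added for reference; the random $H$ and $u$ are arbitrary) confirms the two zero modes and the $\pm\lambda$ pairing:

    import numpy as np

    rng = np.random.default_rng(0)
    n = 6
    A = rng.standard_normal((n, n))
    H = A @ A.T                      # random real positive semidefinite H
    u = rng.standard_normal(n)
    u /= np.linalg.norm(u)           # unit vector

    C = H @ (np.eye(n) - np.outer(u, u))
    M = np.block([[np.zeros((n, n)), C], [C.T, np.zeros((n, n))]])

    print(np.sort(np.linalg.eigvalsh(M)))  # symmetric about 0, two zeros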

What will be the values of $a$ and $b$ in the quicksort recurrence relation in the best case, compared to the general form of the divide-and-conquer recurrence relation?

Posted: 01 Jul 2022 03:10 PM PDT

I wasn't able to solve this. Can you please tell me what the answer is?

Possible answers: (a) $a=1, b=2$; (b) $a=2, b=1$; (c) $a=3, b=2$; (d) $a=2, b=2$.
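
For reference (standard textbook material, added here): the general divide-and-conquer recurrence and quicksort's best case, in which the pivot splits the array into two halves, are

$$T(n) = a\,T(n/b) + f(n), \qquad T_{\text{best}}(n) = 2\,T(n/2) + \Theta(n).$$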

Better upper bound for $ \sum_{k=2}^{\infty} \frac{1}{2^{k-1}} \sum_{n=1}^{\infty} (n! \: \text{mod} \: k) $

Posted: 01 Jul 2022 03:17 PM PDT

Since $n! \: \text{mod} \: k$ vanishes for $n \geq k$, the inner sum $\sum_{n=1}^{\infty} (n! \: \text{mod} \: k)$ can be interpreted as a finite sum of length $k-1$. Also, the maximum value of $n! \: \text{mod} \: k$ is naturally $k-1$.

Taken together one has a max value of:

$$ \sum_{k=2}^{\infty} \frac{(k-1)^2}{2^{k-1}} = 6 $$

Mathematica gives the value of $$ \sum_{k=2}^{\infty} \frac{1}{2^{k-1}} \sum_{n=1}^{\infty} (n! \: \text{mod} \: k) \approx 3.005674093 $$ so my upper bound is clearly quite crude. What would be a better upper bound?

Edit: To be clear, my upper bound is $\frac{x (x+1)}{(1-x)^3}$ (here with $x = 1/2$) for $ \sum_{k=2}^{\infty} x^{k-1} \sum_{n=1}^{\infty} (n! \: \text{mod} \: k)$.
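
A minimal numeric check of the quoted value (added for reference; the sum is truncated at $k = 60$, where the $2^{-(k-1)}$ factor makes the tail negligible):

    from math import factorial

    total = 0.0
    for k in range(2, 61):
        inner = sum(factorial(n) % k for n in range(1, k))  # n! mod k = 0 for n >= k
        total += inner / 2 ** (k - 1)
    print(total)  # ~ 3.005674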

Mathematical explanation of the intuition behind the Lagrange multiplier

Posted: 01 Jul 2022 03:06 PM PDT

Suppose we are to maximize $U_I(x,y)$ subject to $p_xx+p_yy = I$, where $U_I(x,y)$ is differentiable everywhere, $\frac{\partial U}{\partial x} > 0$, $\frac{\partial U}{\partial y} > 0$, and $U$ is quasi-concave. The latter properties of $U$ are given to ensure that the Kuhn-Tucker conditions hold.

If $I' > I$, can we then say that $U_{I'}(x^{*},y^{*}) - U_{I}(x^{*},y^{*}) = (I'-I)\lambda$?

I read that when the constraint changes from $p_xx+p_yy = I$ to $p_xx + p_yy = I'$ (where $I' > I$), the change in $U^{*}$ is given by $(\Delta I) \cdot \lambda$. I verified that this works for the Cobb-Douglas function $U(x,y) = x^a y^b$, but does it work in general? In other words, is it true that $$\frac{\Delta U_{optimal}}{\Delta I} = \lambda?$$
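
For reference, the standard envelope-theorem computation behind this claim (a sketch; it is exact in the limit, i.e. a first-order approximation for finite $\Delta I$): with the Lagrangian $\mathcal L = U(x,y) + \lambda(I - p_x x - p_y y)$, one has

$$\frac{dU^{*}}{dI} = \frac{\partial \mathcal{L}}{\partial I}\bigg|_{(x^{*},\, y^{*},\, \lambda)} = \lambda, \qquad \text{so} \qquad \Delta U^{*} \approx \lambda\, \Delta I.$$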

Search for a non-absorbing Markov chain

Posted: 01 Jul 2022 03:02 PM PDT

According to the usual definition: "A Markov chain is absorbing if it has at least one absorbing state, and if from every state it is possible to go to an absorbing state." (taken, for example, from Dartmouth College - slide #3)

Now I am looking for an example of a Markov chain that has an absorbing state, but without the property that this absorbing state can be reached from every state (meaning that the chain cannot be called absorbing).

I'm sorry to say that I don't have the imagination to construct such an example.

Does someone have one? I would really appreciate it.
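
One minimal candidate (added as a sketch, easy to check against the definition): take three states where state $1$ is absorbing but states $2$ and $3$ only swap with each other, so state $1$ is unreachable from them:

$$P = \begin{pmatrix} 1 & 0 & 0 \\ 0 & 0 & 1 \\ 0 & 1 & 0 \end{pmatrix}.$$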

Using the Poisson distribution to estimate trips for escalators

Posted: 01 Jul 2022 03:01 PM PDT

We've got a few escalators side by side in our main building, and I'm trying to predict how many minutes a month they'll be running collectively as a function of the number of escalator rides, accounting for escalators turning off when not in use, and riders needing a second escalator when one is too busy. The math turns out to be quite complicated all things considered.

Escalators are off by default until a sensor detects a rider approaching one. The escalator stays on for exactly 5 minutes from when the most recent rider got on, unless another rider gets on, in which case that time is extended to 5 minutes after the new rider. I know as an input I get between 20,000 and 200,000 escalator rides per month, divided across our 4 escalators (highly variable as we have a strong holiday effect). Riders almost never get on an escalator that isn't already moving, unless at least 2 people hopped on during the same second, in which case they'll take the next available escalator, and if two escalators have two people hop on each in the same second, riders go for the third and so on until all 4 escalators are in use. We can assume infinite escalators though if it makes it convenient.

I tried a Poisson model but ran into issues modeling the additional 5 minutes when a second rider gets on. I also tried simulating this in Python by sampling times between rides from an exponential distribution, but 180,000 inputs (20K to 200K rides), multiplied by an average of 110,000 intervals between rides per month, multiplied by 100 trials per lambda to gain confidence, is just shy of 2 trillion samples from an exponential distribution, and my likely quite inefficient simulation was taking an obscenely long time to make progress. I cut down to jumps of 100 to get 1,800 inputs, but that's still 20 billion samples from an exponential.

Is there a more efficient way of figuring out the predicted number of minutes the escalators are collectively running as a function of the number of rides per month? Ideally I'd get a probability density function for any given number of rides per month, so I'd have an estimate of the variance too.
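
One possibly useful shortcut (a sketch under simplifying assumptions that ignore the overflow-to-the-next-escalator rule): if arrivals at a single escalator form a Poisson process with rate $\lambda$ per minute, the escalator is off at a given instant exactly when no rider arrived in the preceding 5 minutes, which has probability $e^{-5\lambda}$; the expected running time in a month of $T$ minutes is then $T(1 - e^{-5\lambda})$:

    import math

    def expected_running_minutes(rides_per_month, T=30 * 24 * 60):
        lam = rides_per_month / T  # Poisson arrival rate, rides per minute
        return T * (1 - math.exp(-5 * lam))

    print(expected_running_minutes(20_000))   # low-season estimate
    print(expected_running_minutes(200_000))  # high-season estimate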

Spivak: Determine if a function $F(x)=\int_0^x f$ is differentiable at $0$. $f$ is a function given by a particular graph.

Posted: 01 Jul 2022 02:58 PM PDT

The following is a problem from Ch. 14, "The Fundamental Theorem of Calculus", of Spivak's Calculus.

Consider the function depicted in the picture below

At which points $x$ is $F'(x)=f(x)$?

[figure: the graph of $f$, a sequence of triangular spikes of height $1$ over the intervals $[1/2^{i+1}, 1/2^i]$, with $f(x) = 0$ for $x \leq 0$]

The solution manual says

All $x\neq 0$. $F$ is not differentiable at $0$ because $F(x)=0$ for $x\leq 0$, but there are $x>0$ arbitrarily close to 0 with $\frac{F(x)}{x}=\frac{1}{2}$

I'd like to understand this solution better. Here is my attempt at filling in the intermediate steps in the proof.

$f$ is continuous everywhere except at $x=0$. Thus, we can apply the first fundamental theorem of calculus to conclude that

$$F'(x)=f(x)$$

when $x\neq 0$.

$f$ also happens to be integrable everywhere.

What happens at $x=0$ is the question.

If $F$ is differentiable at $0$, then the following one-sided limits exist and are equal to each other (since $F(0)=0$, these are the one-sided difference quotients at $0$)

$$\lim\limits_{h\to 0^-} \frac{F(h)}{h}=\lim\limits_{h\to 0^+} \frac{F(h)}{h}$$

Now

$$\lim\limits_{h\to 0^-} \frac{F(h)}{h}=0$$

Do we also have $\lim\limits_{h\to 0^+} \frac{F(h)}{h}=0$?

Given any $h>0$, there exists an $n\in\mathbb{N}$ such that $\frac{1}{2^n}<h$.

The sum of the areas of the triangles formed by $f$ up to $\frac{1}{2^n}$ is

$$F(1/2^n)=\frac{1 \cdot \sum\limits_{i=n}^\infty \left ( \frac{1}{2^i}-\frac{1}{2^{i+1}}\right )}{2}$$

The infinite sum telescopes to $\frac{1}{2^n}$ (its partial sums are $\frac{1}{2^n}-\frac{1}{2^{m+1}}$), so $F(1/2^n)=\frac{1}{2^{n+1}}$.

Thus

$$\frac{F(1/2^n)}{1/2^n}=\frac{1}{2}$$

That is, there is always $x$ such that $0<x<h$ and

$$\frac{F(x)}{x}=\frac{1}{2}$$

Therefore, $\lim\limits_{h\to 0^+} \frac{F(h)}{h}$ cannot equal $0$: if it exists at all, it must be $\frac{1}{2}$, so the two one-sided limits cannot agree.

Hence $F$ is not differentiable at $0$.

Is this correct?

Existence of Analytic Continuation of Second-Order Linear, Homogeneous Differential Equation

Posted: 01 Jul 2022 02:46 PM PDT

I am self-studying the book Riemann Surfaces by Simon Donaldson. He leaves the following proposition as an exercise:

Proposition

Suppose $\Omega$ is an open subset of $\mathbb{C}$ and $\gamma:[0, 1] \to \Omega$ is a path in $\Omega$. Furthermore, suppose $P$ and $Q$ are holomorphic functions defined on a neighborhood $U_0$ of $\gamma(0)$ with analytic continuations along $\gamma$. If $A$ is a solution to \begin{equation} \label{eq:1.1} A'' + P A' + Q A = 0 \end{equation} on $U_0$, then $A$ has an analytic continuation along $\gamma$, through solutions of the equation.

Solution Attempt

It is clear to me that one can find local power series solutions at each point along $\gamma$; however, I am not sure how to construct local solutions such that they agree on their common domain of definition. Here is what I have so far for my proof:

Suppose $A$ is a solution to the differential equation on $U_0$, so it is holomorphic on $U_0$. Then, since $A, P, Q$ are holomorphic on $U_0$, they have power series expansions centered at $z_0 \equiv \gamma(0)$ \begin{equation*} A(z) = \sum a_n (z - z_0)^n, \quad P(z) = \sum p_n (z - z_0)^n, \quad Q(z) = \sum q_n (z - z_0)^n \end{equation*} on $U_0$. Furthermore, after plugging these power series into the differential equation and matching coefficients, we find that for each $n \geq 0$ the coefficients of the power series expansion of $A$ satisfy \begin{equation} (n + 2)(n + 1) a_{n + 2} + \sum_{i = 0}^{n}(n + 1 - i) p_i a_{n + 1 - i} + \sum_{j = 0}^{n} q_j a_{n - j} = 0, \end{equation} so the solution to the differential equation is uniquely determined by the values of $a_0$ and $a_1$.

By definition, since $P, Q$ have analytic continuations along $\gamma$, there exist families of holomorphic functions $P_t, Q_t$ for $t \in [0, 1]$, where $P_t, Q_t$ are defined on neighborhoods $U_{t, P}, U_{t, Q}$, respectively, of $\gamma(t)$, such that $P_0 = P$ and $Q_0 = Q$ on some neighborhoods $U_P, U_Q$ of $\gamma(0)$, and for each $t_0 \in [0, 1]$ there exist $\delta_{t_0, P}, \delta_{t_0, Q}$ such that if $|t - t_0| < \delta_{t_0, P}$, then $P_t = P_{t_0}$ on $U_{t, P} \cap U_{t_0, P}$, and if $|t - t_0| < \delta_{t_0, Q}$, then $Q_t = Q_{t_0}$ on $U_{t, Q} \cap U_{t_0, Q}$.

Define the family of holomorphic functions $A_t$ for $t \in [0, 1]$ on the neighborhoods $U_t \equiv U_{t, P} \cap U_{t, Q}$ as follows. (From here on, I am not sure how to proceed.)

Any help would be appreciated. Thank you!

Can any cubic polynomial be transformed into canonical form?

Posted: 01 Jul 2022 03:02 PM PDT

Can any cubic polynomial be transformed from $Ax^3+Bx^2+Cx+D$ to $a(b(x-h))^3 + k$?

For example, how could $x^3+\frac{3x^2}{2}+\frac{x}{2}$ be transformed?
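
For reference (a standard step, added here): the shift $x \mapsto x - \frac{B}{3A}$ always eliminates the quadratic term, giving the depressed form below; a pure-cube form $a(b(x-h))^3+k$ with no linear term is possible only when the coefficient $p$ vanishes:

$$Ax^3+Bx^2+Cx+D = A\left(x+\tfrac{B}{3A}\right)^3 + p\left(x+\tfrac{B}{3A}\right) + q, \qquad p = C - \frac{B^2}{3A},$$

with $q$ determined by matching the constant terms.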

Statistics: making scores a number between 0 and 1

Posted: 01 Jul 2022 02:43 PM PDT

Silly question, but I do not know how to do it:

I let a computer play 100,000 games, and I want to do some statistics on that. I would like to see how well the computer performed.

Question: What is the correct action to take from here?

My thoughts: take the draws, the wins, and the losses. On the x-axis I put the number of games; on the y-axis I put the wins/draws/losses. Is that OK, or is it silly?

I figured it might be smart to make the y-axis values lie between 0 and 1. Is there a formula for that? Something I could use with R?
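
If the goal is just rates in $[0,1]$, dividing each count by the total number of games does it; a minimal sketch (in Python rather than R, with hypothetical counts):

    games = 100_000
    wins, draws, losses = 45_000, 30_000, 25_000  # hypothetical counts
    for name, count in (("win", wins), ("draw", draws), ("loss", losses)):
        print(f"{name} rate: {count / games:.3f}")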

When do we have that $j^{-1}(h(s))=f_s(x)?$

Posted: 01 Jul 2022 02:35 PM PDT

Given $f_s(x)$, a function embedded in quadrant III of the plane, i.e., $\Bbb R^2_{-}$, apply the map $j:=\exp{f_s(\log x)}$ to obtain $g_s(x)$ embedded in $(0,1)^2.$ Transform $g_s(x)$ using the linear transformation $\int_0^1 g_s(x)~dx=h(s).$ Then finally take the inverse transform $j^{-1}$, giving $j^{-1}(h(s)).$

When do we have that $j^{-1}(h(s))=f_s(x)?$

I can give an example that does not work. Let $f_s(x)=s/x.$ This transforms under $j$ to $\exp{\frac{s}{\log x}}=g_s(x).$ Then $\int_0^1g_s(x)~dx$ (which happens to be the Mellin transform of $\exp\frac{1}{\log x}$) becomes $h(s)=2\sqrt{s}K_1(2\sqrt{s})$, where $K_1$ is the modified Bessel function. Clearly the inverse mapping acting on this function is not of the form $1/x.$

Determinant mod $4$ of an $n \times n$ symmetric integer matrix with even entries on the main diagonal

Posted: 01 Jul 2022 02:36 PM PDT

Let $A$ be an $n \times n$ symmetric integer matrix with even entries on the main diagonal. Then (i) if $n \equiv 0 \pmod 4$, then $\det(A)\equiv 0$ or $1 \pmod 4$; (ii) if $n \equiv 2 \pmod 4$, then $\det(A) \equiv 0$ or $-1 \pmod 4$.

In the above, by even entries on the main diagonal, I mean that those entries lie in $\{0,\pm2,\pm4,\pm6,\ldots\}$.
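
An empirical sanity check of case (i) (a sketch added for reference; sympy keeps the determinants exact):

    import random
    from sympy import Matrix

    random.seed(0)
    n = 4  # n = 0 mod 4, i.e. case (i)
    for _ in range(200):
        A = [[0] * n for _ in range(n)]
        for i in range(n):
            A[i][i] = 2 * random.randint(-5, 5)            # even diagonal
            for j in range(i + 1, n):
                A[i][j] = A[j][i] = random.randint(-5, 5)  # symmetric
        d = Matrix(A).det()
        assert d % 4 in (0, 1), (A, d)
    print("all 200 determinants were 0 or 1 mod 4")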

Least square for complex exponential function

Posted: 01 Jul 2022 02:46 PM PDT

Real problem: I have $n$ points $(t_i, \ x_i)$ (from an experiment) and I want to find the best values for $X = (A, B, \xi, \omega)$ to fit

$$ x(t) = \exp \left(-\xi \omega t\right)\left(A\cos \mu \omega t + B \sin \mu \omega t\right) $$

with $0 < \xi < 1$ and $\mu = \sqrt{1-\xi^2}$.

My minimizing function $J$ (least square) is

$$ J(A, \ B, \xi, \ \omega) = \dfrac{1}{2}\sum_{i=1}^{n} (x(t_i) - x_i)^2 $$

Then, the equation to solve is

$$ \nabla J = \begin{bmatrix}\dfrac{\partial J}{\partial A} & \dfrac{\partial J}{\partial B} & \dfrac{\partial J}{\partial \xi} & \dfrac{\partial J}{\partial \omega}\end{bmatrix} = \vec{0} $$

As it's not a linear equation, I use Newton's method

$$ X_{j+1} = X_{j} - \left[\nabla \nabla J\right]^{-1} \cdot \left[\nabla J\right] $$

Unfortunately, this method converges only when I have a good estimate for the initial value $X_{0}$, which is a problem because my data is very noisy.

Complex problem: So I thought about transforming my equation into a complex one, reducing the number of variables to $C, \ s \in \mathbb{C}$:

$$ s = -\xi \omega + i \cdot \mu \omega \in \mathbb{C} $$

$$ C = A + i \cdot B \in \mathbb{C} $$

$$ z(t) = C \exp \left(s t\right) $$

And this equation is a lot easier to solve, because I can separate it and perform a linear regression (least squares) using the equation:

$$ \ln z = \ln C + s \cdot t $$

And once $z(t)$ is found, I can recover $x(t)$ as

$$ x(t) = \Re(z(t)) $$

Unfortunately, I don't have the 'imaginary' points $y_i$ to put into $z_i = x_i + i \cdot y_i$ and compute the regression in the complex plane.

Question: Is there any way to make this work? I mean, to transform a real problem (with 4 real variables) into a complex problem (with 2 complex variables) and get a more stable method (less dependent on $X_0$, or even free of it)?

Context: The function $x(t)$ is the solution of the ODE of a mass-spring-damper system when $0 < \xi < 1$, and I have experimental data $(t_i, \ x_i)$ with noise. So I want the best values of $\xi$ and $\omega$ to characterize my system.

$$ m \ddot{x} + c\dot{x} + k x = 0 $$

$$ \ddot{x} + 2\xi \omega \dot{x} + \omega^2 x = 0 $$
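
As an aside (a sketch, not an answer to the complex-variable question): scipy's bounded nonlinear least squares is often much less sensitive to the initial guess than a hand-rolled Newton iteration, since the constraint $0 < \xi < 1$ can be imposed directly; the data below is synthetic, standing in for the experimental $(t_i, \ x_i)$:

    import numpy as np
    from scipy.optimize import curve_fit

    def model(t, A, B, xi, omega):
        mu = np.sqrt(1.0 - xi ** 2)
        return np.exp(-xi * omega * t) * (A * np.cos(mu * omega * t)
                                          + B * np.sin(mu * omega * t))

    rng = np.random.default_rng(0)
    t_data = np.linspace(0.0, 10.0, 500)
    x_data = model(t_data, 1.0, 0.3, 0.05, 2.0) + 0.05 * rng.normal(size=t_data.size)

    p0 = [1.0, 0.0, 0.1, 1.5]  # rough guess; omega could come from an FFT peak
    popt, _ = curve_fit(model, t_data, x_data, p0=p0,
                        bounds=([-np.inf, -np.inf, 0.0, 0.0],
                                [np.inf, np.inf, 1.0, np.inf]))
    print(popt)  # recovered (A, B, xi, omega)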

Non-diagonal n-th roots of the identity matrix

Posted: 01 Jul 2022 02:38 PM PDT

Q. Are there $n$-th root analogs of this non-diagonal cube-root of the $3 \times 3$ identity matrix?

\begin{align*} \left( \begin{array}{ccc} 0 & 0 & -i \\ i & 0 & 0 \\ 0 & 1 & 0 \\ \end{array} \right)^3 = \left( \begin{array}{ccc} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \\ \end{array} \right) \end{align*}

I am looking for matrices with $A^n=I$, where $I$ is the identity of any dimension $\le n$.

(A naive question: I am not an expert in this area.)
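
One standard family (added as a sketch): the $n \times n$ cyclic-shift permutation matrix satisfies $P^n = I$ and is non-diagonal for $n \geq 2$; a quick check:

    import numpy as np

    n = 5
    P = np.roll(np.eye(n, dtype=int), 1, axis=1)  # P[i, (i+1) % n] = 1
    assert np.array_equal(np.linalg.matrix_power(P, n), np.eye(n, dtype=int))
    print(P)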

Are closed rotationally symmetric surfaces which can be isometrically embedded into Euclidean space, always embedded as surfaces of revolution?

Posted: 01 Jul 2022 03:07 PM PDT

Setup:
Consider a closed (compact without boundary) $2$-manifold $(\Sigma, h)$ with an isometric embedding into Euclidean $3$-space $\Phi \colon (\Sigma, h) \to (\mathbb{R}^3, \delta)$, that is $\delta|_{\Phi(\Sigma)}=h$.

Question:
If $(\Sigma, h)$ has $S^1$-symmetry, then does the image $\Phi(\Sigma)\subset \mathbb{R}^3$ automatically have $S^1$ symmetry as a subset of Euclidean space? I.e., does $\Phi(\Sigma)\subset \mathbb{R}^3$ look like a surface of revolution?

Phrased another way:
Let $iso(\Sigma, h)$ be the isometry group of the surface $(\Sigma, h)$ and let $iso(\mathbb{R}^3, \Phi(\Sigma), \delta)\subset iso(\mathbb{R}^3,\delta)$ be the set of isometries of Euclidean space $f\colon (\mathbb{R}^3, \delta) \to (\mathbb{R}^3, \delta)$ which preserve the surface as a subset, i.e. $f(\Phi(\Sigma)) =\Phi(\Sigma) \subset \mathbb{R}^3$. If $S^1\subset iso(\Sigma, h)$ is a subgroup, then is it true that $S^1 \subset iso(\mathbb{R}^3, \Phi(\Sigma), \delta)$ is also a subgroup?

Discussion:
I'm nearly certain that the answer to this question is "yes", although my intuition mainly comes from the vague idea that embedded closed surfaces should be (infinitesimally) rigid, and from my inability to conceive of a potential counterexample.

This problem is related to the Weyl Embedding Problem: the problem of isometrically embedding a positive-Gauss-curvature $(S^2, h)$ into $(\mathbb{R}^3, \delta)$ so that the embedding is strictly mean convex. However, this problem differs in three key ways: we assume that $(\Sigma, h)$ has $S^1$ symmetry, $(\Sigma, h)$ may have negative Gauss curvature at some points, and $\Sigma$ may be homeomorphic to either $S^2$ or $T^2$. My hope is that the symmetry is enough to compensate for the 'lack of rigidity' introduced by negative curvature.

Any surface $(\Sigma, h)$ with $S^1$ symmetry admits a metric of the form $h = ds^2 + \rho^2(s) d\theta^2$. If there exists a function $z(s)$ such that $\dot{\rho}^2 + \dot{z}^2=1$, then (using cylindrical coordinates) $\Phi(s,\theta) := (\rho(s), z(s), \theta) \in \mathbb{R}_{\geq0} \times \mathbb{R}\times S^1 = \mathbb{R}^3$ describes an isometric embedding $\Phi\colon (\Sigma, h) \to (\mathbb{R}^3, \delta)$ which clearly satisfies the property that $S^1 \subset iso(\mathbb{R}^3, \Phi(\Sigma), \delta)$, as desired. The real question is in two parts:

  1. If $z(s)$ exists, then does $S^1 \subset iso(\mathbb{R}^3, \Psi(\Sigma), \delta)$ hold for every isometric embedding $\Psi\colon (\Sigma, h) \to (\mathbb{R}^3, \delta)$, not just the ones constructed through this method?
  2. If $z(s)$ does not exist, then is it true that no isometric embedding with the desired property exists?

Some things to keep in mind:

  • There are usually "boundary conditions" that need to be satisfied to guarantee that $(\rho(s), z(s), \theta)$ is smooth at the poles, but those are automatically satisfied here because $(\Sigma, h)$ is assumed to be a closed smooth manifold.
  • If the Gauss curvature of $(\Sigma, h)$ were positive, then any embedding $\Phi \colon (\Sigma, h) \to (\mathbb{R}^3, \delta)$ into Euclidean space would be unique up to rigid motions, and the problem would be more or less trivial.
  • If $(\Sigma, h)$ were allowed to be non-closed, then there would be trivial counterexamples. Consider $(\Sigma, h) = (\mathbb{R}^2, \delta)$ embedded into $(\mathbb{R}^3, \delta)$ as a parabola cross a line (the taco embedding). Clearly $(\mathbb{R}^2, \delta)$ is rotationally symmetric, but the taco isn't.
  • Even in the positive curvature case, embeddings are not unique for non-closed surfaces. A classic example is the fact that spherical caps (which look like contact lenses) are not rigid. So somehow the closedness of $\Sigma$ is important to the proof.
  • This question has a natural generalization which I state below. Perhaps seeing the problem in this generality will allow people to use techniques that I had not considered.

Generalized Statement:
Let $(\Sigma^{n-1}, h)$ for $n>2$ be a closed Riemannian manifold with an isometric embedding $\Phi \colon (\Sigma, h) \to (M^n, g)$. Does $\Phi$ induce an injective homomorphism of the isometry groups $\Phi_* \colon iso(\Sigma, h) \to iso(M, \Phi(\Sigma), g)$?
(Note that there are trivial counter examples when $n=2$ since all embeddings of $S^1$ are isometric.)

Similar triangles and Concentric Circles

Posted: 01 Jul 2022 02:55 PM PDT

We have two similar triangles, $ABC$ and $A'B'C'$. The triangles are each inscribed in a circle. The two circles are concentric at a point $O$ and their radii differ by a length $r$. Assume, without loss of generality, that the triangle $A'B'C'$ is larger than the triangle $ABC$.

What are the ratios $\frac{AB}{A'B'} = \frac{BC}{B'C'} = \frac{CA}{C'A'}$?

What are the lengths $AA'$, $BB'$ and $CC'$? Assume that the two triangles are oriented so as to minimize these lengths.

An important caveat: assume we are in some general, non-continuous metric space; i.e., we may not rely on being able to posit additional points, on the "Parallel Postulate", or on computing arccosines (because perpendicular projections are not a thing in general metric spaces).

However, a solution in a continuous Euclidean metric space would still be helpful.

Edit: While I could work out some formulas using theorems in a Euclidean metric space, I am not sure whether any of those would generalize to general, non-continuous metric spaces. For example, my actual metric space might be on genomic strings under Levenshtein edit distance, or on images ($n$-dimensional probability distributions) under Wasserstein distance.

So far, I have been able to come up with a definition of circles using the law of cosines, as opposed to using a center and radius. However, I cannot justify the existence of a point $O$ that lies at the center of both circles.

Edit 2: As pointed out in the comments, knowing the radii difference is not enough to compute the ratios. With this in mind, I'd like to add an extra piece of information. Let's assume we know the lengths $AB$, $BC$ and $CA$.

Definition and derivation of sine - unit circle - right triangle

Posted: 01 Jul 2022 03:07 PM PDT

Sine is defined as

$\sin(\alpha) = \frac{\text{opposite}}{\text{hypotenuse}}$

My questions are:

  • why is sine only defined for right triangles?
  • where does the definition come from / how is it derived?
  • is the unit circle used to define sine or just to show the values?

As far as I know, sine is defined and derived using the idea of similarity of triangles, but I do not understand exactly why, nor why these triangles have to be right triangles.

A condition on when an intermediate field is the fixed field of a subgroup of the Galois group.

Posted: 01 Jul 2022 02:53 PM PDT

This is a question from Fenrick's Introduction to the Galois Correspondence.

Specifically, it is a question from problem 1.4 in chapter 3 on page 146. Let $K$, $L$, and $F$ be fields with $K\subset L\subset F$. Let $G=Gal_K F$ and let $H<G$. Show that if $\tau (u)=u $ for all $u\in L$ and $[L:K]\geq (G:H)$, then $L=H'$, where $H'$ is the fixed field of $H$.

My question is this: what is $\tau$? Fenrick does not specify. I have some guesses, such as that $\tau$ is an arbitrary element of $H$, but I'd rather not try to answer the problem while having guessed the wrong hypothesis about $\tau$. Note that I am not looking for a solution to the problem in the text, merely a clarification of the notation.

Quick evaluation of definite integral $\int_0^\pi \frac{\sin 5x}{\sin x}dx$

Posted: 01 Jul 2022 02:29 PM PDT

Find$$\int_0^{\pi}\frac{\sin 5x}{\sin x}dx$$

I can solve it by expressing the integrand via polynomials in sine and cosine, as shown in the links below, but it's huge (applying double-angle formulas twice; I noticed that using polynomials in cosine is better because the integral spits out sines, which vanish at the limits), so I want a faster method, if one exists. Please don't use contour integration :)

The only thing I noticed is that the integrand is symmetric about the midpoint of the given interval, i.e.$$\frac{\sin 5x}{\sin x}= \frac{\sin 5(\pi-x)}{\sin(\pi- x)}.$$

Related:

  • Determine the indefinite integral $\int \frac{\sin x}{\sin 5x}dx$
  • Integral of $\int \frac{\sin(3x)}{\sin(5x)} \, dx$
  • Expressing $\frac {\sin(5x)}{\sin(x)}$ in powers of $\cos(x)$ using complex numbers
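
One quick route (added for reference, verifiable by repeatedly expanding $\sin 5x = \sin 4x \cos x + \cos 4x \sin x$): the integrand is a Dirichlet kernel, so

$$\frac{\sin 5x}{\sin x} = 1 + 2\cos 2x + 2\cos 4x, \qquad \int_0^{\pi}\frac{\sin 5x}{\sin x}\,dx = \pi + 0 + 0 = \pi.$$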

An Inverse Jensen Inequality

Posted: 01 Jul 2022 03:20 PM PDT

Given $\sigma>0$, let $X\sim N(0,\sigma^2I_d)$ be a normal random variable in $\mathbb R^d$. Prove or disprove: for any 1-Lipschitz function $f:\mathbb R^d\rightarrow\mathbb R$, there holds $$ \log\big(\mathbb E[e^{f(X)}]\big) - \mathbb E[f(X)] \leqslant d\sigma^2. $$ Here, $f$ being 1-Lipschitz means that $f$ is continuous on $\mathbb R^d$ and $|f(x)-f(y)|\leqslant |x-y|$ for all $x,y\in\mathbb R^d$.

This problem comes from the proof of Proposition 3 in this paper, where the authors claimed that the log-Sobolev inequality can be used to prove eq. 25. The problem above can be viewed as a simplified version of eq. 25 of the paper. However, I found it non-trivial to obtain eq. 25 using the log-Sobolev inequality, the log-Harnack inequality, or other functional inequalities I know. It looks more like an inverse type of Jensen's inequality, which is why the problem is so named.

Here are some direct observations about this problem. First of all, by Jensen's inequality, one has $$ \log\big(\mathbb E[e^{f(X)}]\big) - \mathbb E[f(X)] \geqslant 0, $$ which seems to be of no help.

Another idea is to consider the random variable $Y = f(X)$ rather than $X$ itself. Then it may be reasonable to consider $$ \log\big( \mathbb E[e^Y]\big) - \mathbb E[Y] \leqslant \mathrm{Var}(Y). $$ Unfortunately, this inequality does not hold for arbitrary $Y$. Also, the Lipschitz property of $f$ becomes implicit here.

Any suggestions are appreciated. Even the solution in the case $d=1$ is fine.
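
A possibly relevant standard fact (added here, not from the original post): Gaussian concentration (e.g. via the Herbst argument from the log-Sobolev inequality) gives $\mathbb E\, e^{\lambda(f(X)-\mathbb E f(X))} \leqslant e^{\lambda^2\sigma^2/2}$ for $1$-Lipschitz $f$, and taking $\lambda = 1$ yields a dimension-free bound

$$\log\big(\mathbb E[e^{f(X)}]\big) - \mathbb E[f(X)] \leqslant \frac{\sigma^2}{2} \leqslant d\sigma^2.$$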

If $g(n) = \frac{f(n+1) - f(n)}{f(n)}$ is unbounded on $[0, \infty)$, then $\sum_{n \in \mathbb N}\frac{2^n}{f(n)}$ converges

Posted: 01 Jul 2022 03:04 PM PDT

Let $f$ be an unbounded, non-decreasing, positive function on $[0, \infty)$.

I am trying to determine whether the following holds: if $g(n) = \frac{f(n+1) - f(n)}{f(n)}$ is unbounded on $[0, \infty)$ (but bounded on $[0, a]$ for every positive $a$), then $\sum_{n \in \mathbb N}\frac{2^n}{f(n)}$ converges.

One function satisfying the condition on $g$ can be defined by $f(0) = 2,\ f(n+1) = f(n)^2$.

Then we get:

$g(n) = \frac{f(n+1) - f(n)}{f(n)} = \frac{f(n)^2 - f(n)}{f(n)} =\frac{f(n)(f(n) - 1)}{f(n)} = f(n) -1 \rightarrow\infty $

And if we define:

$f(n) = 2^{2n}$

We get:

$g(n) = \frac{2^{2(n+1)} - 2^{2n}}{2^{2n}} = \frac{2^{2n+2} - 2^{2n}}{2^{2n}} = \frac{3\cdot2^{2n}}{2^{2n}} = 3$, and $g$ is bounded.

$f(n) = 2^{2^n}$, however, does satisfy the condition on $g$.

It seems like the family of functions satisfying the condition grows extremely fast.

I'm not sure how to use any of the standard theorems to show convergence of the series.

Hints will be appreciated.
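
One identity that may help (added for reference): the ratio of consecutive terms of the series can be written directly in terms of $g$,

$$\frac{2^{n+1}/f(n+1)}{2^{n}/f(n)} = \frac{2 f(n)}{f(n+1)} = \frac{2}{1 + g(n)},$$

so the question becomes how the unboundedness of $g$ controls this ratio.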

Show there is a positive complete measure $m$ on a $\sigma$-algebra $\mathfrak{M}$ in $\mathbb{R}^k$ such that $m(W)={\rm vol}(W)$ for every $k$-cell $W$.

Posted: 01 Jul 2022 03:16 PM PDT

My Background: Walter Rudin's Principles of Mathematical Analysis Chapter 1-7 & 11.

Source: The problem arises from Walter Rudin's Real & Complex Analysis:

2.20 Theorem There exists a positive complete measure $m$ defined on a $\sigma$-algebra $\mathfrak{M}$, with the following properties:

(a) $m(W) = \operatorname{vol}(W)$ for every $k$-cell $W$.

Rudin began the proof by defining a positive linear functional

$$ \Lambda \, : \, C_c(\mathbb{R}^k) \to \mathbb{R} \, : \, \Lambda f = \lim_{n\to\infty} \Lambda_n f, $$

where $\Lambda_n f = 2^{-nk} \sum_{x\in P_n} f(x)$. ($f$ here is real, and $P_n$ is the set of all $x\in\mathbb{R}^k$ whose coordinates are integral multiples of $2^{-n}$.) Then he goes:

To prove (a), let $W$ be the open cell 2.19(4), let $E_r$ be the union of those boxes belonging to $\Omega_r$ whose closures lie in $W$, choose $f_r$ so that $\overline{E}_r \prec f_r \prec W $, and put $g_r = \max\{f_1, \ldots, f_r\}$. Our construction of $\Lambda$ shows that

$$ \operatorname{vol}(E_r) \leq \Lambda f_r \leq \Lambda g_r \leq \operatorname{vol}(W). \tag{4} $$

As $r \to \infty$, $\operatorname{vol}(E_r) \to \operatorname{vol}(W)$, and

$$ \bbox[background:#FFE600;padding:5px;]{\Lambda g_r = \int g_r \, dm} \to m(W) \tag{5} $$

by the monotone convergence theorem, since $g_r(x) \to \chi_{W}(x)$ for all $x \in \mathbb{R}^k$. Thus $m(W) = \operatorname{vol}(W)$ for every open cell $W$, and since every $k$-cell is the intersection of a decreasing sequence of open $k$-cells, we obtain (a).

My Question: How could $\Lambda g_r = \int g_r \, dm$ be justified without $m(E) = \operatorname{vol}(E) = \prod_{i=1}^k (\beta_i-\alpha_i)$ for every $k$-cell $E$, which is precisely what we are trying to show?

Any help would be greatly appreciated.

A Transylvanian Hungarian MC Problem.

Posted: 01 Jul 2022 02:59 PM PDT

Question

Show that there exist infinitely many non-similar triangles such that the side lengths are positive integers and the areas of the squares constructed on their sides are in arithmetic progression.

Solution

Let the sides be $a, b, c$ with $a \ge b \ge c$, so we have $a^2+c^2=2b^2$. Put $a/b=x$, $c/b=y$ with $x,y>0$, so we get $x^2+y^2=2 \quad (*)$, with $x,y\in \mathbb Q$, $x,y>0$. Now consider a particular solution $(p,q)$, and consider the slope $k$ of the line joining $(p,q)$ and $(x,y)$: $k=\frac{y-q}{x-p}$. Now express $y$ in terms of $k,p,q$, substitute into $(*)$, and then, applying Vieta's relations, we can easily solve the problem. We must keep the problem conditions in mind when choosing $k$ carefully. One such family of solutions is given by $(a,b,c)=(k^2+2k-1,\ k^2+1,\ k^2-2k-1) \quad (\#\#\#)$. We must choose $k\in \mathbb N$, $k\ge5$, carefully, so that the values are consistent with the problem conditions. It is quite obvious that we can make infinitely many non-similar triangles satisfying the condition from this family with suitable values of $k$. [The proof of the last sentence is trivial, so I am not posting it; it uses basic divisibility and the triangle inequality.]

Source

https://artofproblemsolving.com/community/c6h464522p2602940

Doubt

Please can somebody explain how Vieta's relations apply here, how equation $(\#\#\#)$ arises, and why $k\in \mathbb N$, $k\ge5$?
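
For reference (a verification added here, not a derivation): expanding shows that the family in $(\#\#\#)$ does satisfy the arithmetic-progression condition $a^2 + c^2 = 2b^2$, since with $X = k^2-1$ and $Y = 2k$,

$$(k^2+2k-1)^2 + (k^2-2k-1)^2 = (X+Y)^2 + (X-Y)^2 = 2(X^2+Y^2) = 2(k^2+1)^2.$$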

Existence and Uniqueness of the solutions to SDE with locally Lipschitz coefficient and linear growth

Posted: 01 Jul 2022 02:54 PM PDT

The following theorem states the existence and uniqueness of a strong solution for a class of SDEs.

Theorem. For the SDE $$ \mathrm{d} X=G(t, X(t)) \mathrm{d} t+H(t, X(t)) \mathrm{d} W(t), \quad X\left(t_{0}\right)=X_{0}, $$ assume the following hold.
(1) Both $G(t, x)$ and $H(t, x)$ are continuous on $(t, x) \in\left[t_{0}, T\right] \times \mathbb{R}$.
(2) The coefficient functions $G$ and $H$ satisfy the Lipschitz condition $$ |G(t, x)-G(t, y)|+|H(t, x)-H(t, y)| \leq K|x-y| . $$ (3) The coefficient functions $G$ and H satisfy a growth condition in the second variable, $$ |G(t, x)|^{2}+|H(t, x)|^{2} \leq K\left(1+|x|^{2}\right), $$ for all $t \in\left[t_{0}, T\right]$ and $x \in \mathbb{R}$.
Then the SDE has a strong solution on $\left[t_{0}, T\right]$ that is continuous with probability 1 and $$ \sup _{t \in\left[t_{0}, T\right]} \mathbb{E}\left[X^{2}(t)\right]<\infty, $$ and for each given Wiener process $W(t)$, the corresponding strong solutions are pathwise unique, which means that if $X$ and $Y$ are two strong solutions, then $$ \mathbb{P}\left[\sup _{t \in\left[t_{0}, T\right]}|X(t)-Y(t)|=0\right]=1 . $$

(I took it from Dunbar, S.R. Mathematical Modeling in Economics and Finance: Probability, Stochastic Processes, and Differential Equations; Vol. 49, American Mathematical Soc., 2019.)

Anyway, in my case I have an SDE where $H$ is constant and $G :\mathbb R_+\times\mathbb R\to\mathbb R$ is such that $G(t, X(t))=F(t)X(t)$, where $F(t)=\frac{1}{\cos t+ \sin t}$. Let $t_0=0$, $T>0$, and let $\mathcal A$ be the domain of $F$, i.e. $\mathcal A=\left\{t \mid \cos t + \sin t\neq0\right\}$. My question is: does the above SDE admit a unique strong solution?

My attempt

Point (1) is satisfied because $G$ is continuous for $(t, x) \in\left(\left[0, T\right]\cap\mathcal A\right) \times \mathbb{R}$. Point (2) does not hold globally, but a local Lipschitz condition holds with $$K=\sup_{t\in\left(\left[0, T\right]\cap\mathcal A\right)}F(t).$$ Similar considerations apply to point (3).

Can I infer the existence of a strong solution on $\left[0, T\right]\cap\mathcal A$? I think the answer is not so obvious, because I have replaced the global Lipschitz condition in point (2) with a local one. What do you think?

Integration of a function approximated by an $n$th-order polynomial

Posted: 01 Jul 2022 02:27 PM PDT

I've been playing with Simpson's rule and a question came to mind. The rectangular rule is a 0th-order polynomial approximation for integration; the trapezoidal rule is 1st-order; Simpson's rule is 2nd-order. What about $n$th order? I've been working on this for more than two weeks and couldn't get the answer. How could I solve this?

What I did is below.

approach 1

An $n$th-order polynomial function passing through the $n+1$ points $(x_i, y_i)$, $0 \leq i\leq n$, can be written via the Lagrange polynomial as

$$ g(x) = \sum_{j=0}^{n} y_j \displaystyle \prod_{i=0 \atop i \neq j}^{n} \frac{x-x_i}{x_j - x_i}$$

So, if $y=f(x)$, then $g(x)$ is a polynomial approximation of $f(x)$, and the integral of $f(x)$ can be written as follows.

$$ \int_{x_0}^{x_n} f(x) dx \simeq \int_{x_0}^{x_n} g(x) dx = \int_{x_0}^{x_n} \sum_{j=0}^{n} f(x_j) \displaystyle \prod_{i=0 \atop i \neq j}^{n} \frac{x-x_i}{x_j - x_i} dx \\ = \sum_{j=0}^{n} f(x_j) \displaystyle \prod_{i=0 \atop i \neq j}^{n} (x_j - x_i)^{-1} \int_{x_0}^{x_n} \prod_{i=0 \atop i \neq j}^{n} (x-x_i) dx $$

Here, we suppose $x_i = x_0 + hi$, where $h$ is constant, and substitute $z = \frac{x - x_0}{h}$, so that $z_i = \frac{x_i - x_0}{h} = i$; the equation then simplifies.

$$\sum_{j=0}^{n} f(x_j) \displaystyle \prod_{i=0 \atop i \neq j}^{n} (x_j - x_i)^{-1} \int_{x_0}^{x_n} \prod_{i=0 \atop i \neq j}^{n} (x-x_i) dx \\ = \sum_{j=0}^{n} f(x_j) \displaystyle \prod_{i=0 \atop i \neq j}^{n} (j - i)^{-1} h \int_{0}^{n} \prod_{i=0 \atop i \neq j}^{n} (z-i) dz$$

$f(x_j)$ was left intact intentionally. Solving the equation boils down to evaluating $\int_{0}^{n} \displaystyle \prod_{i=0 \atop i \neq j}^{n} (z-i) dz$, which is quite difficult.

$S(z, j) := \displaystyle \prod_{i=0 \atop i \neq j}^{n} (z-i)$ can be written as follows.

$$S(z, j) = \displaystyle \prod_{i=0 \atop i \neq j}^{n} (z-i) = \sum_{i=0}^{n} a_i z^i$$

The $a_i$ are constants. Notice the upper limit of the sum is $n$, because the number of factors of $z$ in the product is $(n - 0 + 1) - 1 = n$ (since $i \neq j$).

It is clear that $S(z, j) = 0$ when $z \in \{0, 1, \ldots, n\}$ and $z \neq j$. So the following is true.

$$\sum_{i=0}^{n} a_i z^i = 0 \quad (z \in \{0, 1, \ldots , n\},\ z \neq j)$$

The integration can be calculated as below.

$$\int_{0}^{n} \displaystyle \prod_{i=0 \atop i \neq j}^{n} (z-i) dz = \int_{0}^{n} S(z, j) dz = \int_{0}^{n} \sum_{i=0}^{n} a_i z^i dz \\ = \big[\sum_{i=0}^{n} a_i \frac{z^{i+1}}{i+1}\big]_{0}^{n} \\ = \sum_{i=0}^{n} a_i \frac{n^{i+1}}{i+1}$$

To solve this, evaluating $a_i$ is required. $a_i$ can be evaluated as below.

$$\sum_{i=0}^{n} a_i z^i = 0 \quad (z \in \{0, 1, \ldots , n\},\ z \neq j) \\ \Leftrightarrow \displaystyle {\begin{pmatrix} 1 & 0 & 0 & \cdots & 0 \\ 1 & 1 & 1 & \cdots & 1 \\ 1 & 2 & 2^{2} & \cdots & 2^{n} \\ \vdots & \vdots & \vdots & \ddots & \vdots \\ 1 & n & n^{2} & \cdots & n^{n} \end{pmatrix}} {\begin{pmatrix}a_{0}\\a_{1}\\a_{2}\\\vdots \\a_{n}\end{pmatrix}} = {\begin{pmatrix}b_{0}\\b_{1}\\b_{2}\\\vdots \\b_{n}\end{pmatrix}} \\ (b_i = \begin{cases} S(j,j)\,\,(i = j) \\ 0 \,\,(i \neq j) \end{cases} ) $$

As $a_n = 1$ and $a_0 = \prod_{i=0 \atop i \neq j}^{n} (-i)$, the equation can be written as follows.

$$\sum_{i=0}^{n} a_i z^i = 0 \quad (z \in \{0, 1, \ldots , n\},\ z \neq j) \\ \Leftrightarrow \sum_{i=0}^{n-1} a_i z^i = -z^n \quad (z \in \{1, 2, \ldots , n\},\ z \neq j) \\ \Leftrightarrow \displaystyle {\begin{pmatrix} 1 & 1 & 1 & \cdots & 1 \\ 1 & 2 & 2^{2} & \cdots & 2^{n-1} \\ \vdots & \vdots & \vdots & \ddots & \vdots \\ 1 & n & n^{2} & \cdots & n^{n-1} \end{pmatrix}} {\begin{pmatrix}a_{1}\\a_{2}\\\vdots \\a_{n-1}\end{pmatrix}} = {\begin{pmatrix}b_{1}\\b_{2}\\\vdots \\b_{n-1}\end{pmatrix}} \\ (b_i = \begin{cases} S(j,j) - z^j\,\,(i = j) \\ -z^i \,\,(i \neq j) \end{cases} ) $$

The leftmost matrix is a Vandermonde matrix, and multiplying both sides on the left by its inverse gives the $a_i$. However, the inverse matrix is quite complicated and I find it pretty hard to calculate (inverse of Vandermonde's matrix).

approach 2

I guessed that the $i \neq j$ condition makes the problem difficult, so I went for deleting it.

$$ \int_{0}^{n} \displaystyle \prod_{i=0 \atop i \neq j}^{n} (z-i) dz = \int_{0}^{n} \frac{\displaystyle \prod_{i=0}^{n} (z-i)}{z-j} dz$$

The limit of $\frac{\prod_{i=0}^{n} (z-i)}{z-j}$ as $z$ approaches $j$ is given by L'Hôpital's rule, and it is finite.

$$ \frac{\left(\prod_{i=0}^{n} (z-i)\right)'}{(z-j)'}\bigg|_{z=j} = \left(\prod_{i=0}^{n} (z-i)\right)'\bigg|_{z=j}$$

But after hours of thinking, I couldn't come up with how to evaluate $\int_{0}^{n} \frac{\displaystyle \prod_{i=0}^{n} (z-i)}{z-j} dz$: there are $z$s in both the numerator and the denominator, and integration by parts didn't work.

I've read the following articles and still couldn't find the answer.

  1. Riemann sum
  2. Newton–Cotes formulas
  3. Lagrange polynomial
  4. Vandermonde matrix
  5. Gaussian quadrature

How do I solve this?
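
For what it's worth, the numbers being sought are exactly the closed Newton-Cotes weights $w_j = \int_0^n \prod_{i \neq j} \frac{z-i}{j-i}\, dz$, and they can be computed symbolically, sidestepping the Vandermonde inversion; a minimal sketch (scipy also ships precomputed weights as scipy.integrate.newton_cotes):

    from functools import reduce
    import sympy as sp

    z = sp.symbols('z')

    def newton_cotes_weights(n):
        # Closed Newton-Cotes weights on [0, n] with unit spacing (multiply by h).
        weights = []
        for j in range(n + 1):
            factors = [(z - i) / (j - i) for i in range(n + 1) if i != j]
            basis = reduce(lambda p, q: p * q, factors, sp.Integer(1))
            weights.append(sp.integrate(sp.expand(basis), (z, 0, n)))
        return weights

    print(newton_cotes_weights(1))  # [1/2, 1/2]      -> trapezoidal rule
    print(newton_cotes_weights(2))  # [1/3, 4/3, 1/3] -> Simpson's rule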

How to find a generator of a cyclic group?

Posted: 01 Jul 2022 03:06 PM PDT

A cyclic group is a group that is generated by a single element. That means there exists an element $g$, say, such that every other element of the group can be written as a power of $g$. This element $g$ is a generator of the group.

Is that a correct explanation of what a cyclic group and a generator are? How can we find the generators of a cyclic group, and how can we tell how many generators there are?
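
A concrete illustration (added as a sketch, for the additive group $\mathbb Z_n$, where $k$ is a generator iff $\gcd(k, n) = 1$, so there are $\varphi(n)$ generators):

    from math import gcd

    n = 10
    generators = [k for k in range(1, n) if gcd(k, n) == 1]
    print(generators)       # [1, 3, 7, 9]
    print(len(generators))  # phi(10) = 4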
