Saturday, October 30, 2021

Recent Questions - Mathematics Stack Exchange


Derivative of $x^TA$ with respect to $x$

Posted: 30 Oct 2021 07:47 PM PDT

Wikipedia says that given $f(x) = x^TA$: $$\frac{\partial f}{\partial x} = A^T$$

but I am having trouble understanding this result. I tried doing the following:

$$f(x+h) - f(x) = h^TA$$

I am now stuck at making the above expression into a linear mapping of $h$. I'm not sure how to relate this to $A^T$. If $A$ were a vector, it would be easy because I could just switch the transpose and have $h^Ta = a^Th$. Since $A$ is a matrix, I am not sure how to proceed.
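For what it's worth, a sketch of one way to finish (hedging on layout conventions, which vary across references): the difference already computed is linear in $h$, and transposing it identifies the matrix of the map.

```latex
f(x+h) - f(x) = h^T A = \left( A^T h \right)^T
```

Componentwise, $f_j(x) = \sum_i A_{ij} x_i$, so $\partial f_j / \partial x_i = A_{ij}$; arranging these entries as a Jacobian with rows indexed by $j$ gives exactly $A^T$, matching the convention Wikipedia uses for this result.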

Injectivity of a function with matrix

Posted: 30 Oct 2021 07:46 PM PDT

I'm trying to prove injectivity of $g_a:$ \begin{array}[t]{lrcl} \mathcal{S}_d^{++}(\mathbb{R}) & \longrightarrow & \mathcal{S}_d(\mathbb{R}) \\ \Gamma & \longmapsto & a\Gamma - \Gamma^{-1} \end{array} for $a > 0$ (the domain is the symmetric positive definite matrices and the codomain the symmetric matrices).

Let $\Gamma, \Sigma \in \mathcal{S}_d^{++}(\mathbb{R})$ be such that $g_a(\Gamma) = g_a(\Sigma)$.

Thus, $a(\Gamma - \Sigma) = \Gamma^{-1} - \Sigma^{-1}$ and, with $\Lambda := \sqrt{\Sigma}^{-1}\Gamma \sqrt{\Sigma}^{-1}$,

$a(\sqrt{\Sigma}\Lambda \sqrt{\Sigma} - \sqrt{\Sigma}\sqrt{\Sigma}) = \sqrt{\Sigma}^{-1}\Lambda^{-1}\sqrt{\Sigma}^{-1} - \sqrt{\Sigma}^{-1}\sqrt{\Sigma}^{-1}$ (since $\sqrt{\Sigma}^{-1}\sqrt{\Sigma}^{-1} = (\sqrt{\Sigma}\sqrt{\Sigma})^{-1}$)

so $a\sqrt{\Sigma}(\Lambda - I_d) \sqrt{\Sigma} = \sqrt{\Sigma}^{-1}(\Lambda^{-1} - I_d)\sqrt{\Sigma}^{-1}$

or $a\, \Sigma(\Lambda - I_d) \Sigma = \Lambda^{-1}- I_d$.

We then notice that $\Lambda = \sqrt{\Sigma}^{-1}\Gamma \sqrt{\Sigma}^{-1}$ is still positive definite, so $\Lambda = PDP^{-1}$ for some $P \in \text{GL}_d(\mathbb{R})$ and $D = \text{diag}(\lambda_1, \dots, \lambda_d)$, where $\lambda_i > 0$ for all $i \in \{1, \dots, d\}$.

Thus, $a\,\Sigma(PDP^{-1} - I_d)\Sigma = PD^{-1}P^{-1} - I_d$

so $a\,\Sigma P(D - I_d) P^{-1}\Sigma = P(D^{-1} - I_d)P^{-1}$

and $a\, P^{-1}\Sigma P(D - I_d)P^{-1}\Sigma P = D^{-1}- I_d$

and finally $a\,Q\,\text{diag}(\lambda_1 \,-\, 1, \dots, \lambda_d \,- \,1)\,Q = \text{diag}(\lambda_1^{-1} \,-\, 1, \dots, \lambda_d^{-1}\, - \,1) \text{ with } Q := P^{-1}\Sigma P$.

Now, if it was $a\,Q\,\text{diag}(\lambda_1 \,-\, 1, \dots, \lambda_d \,- \,1)\,Q^{-1} = \text{diag}(\lambda_1^{-1} \,-\, 1, \dots, \lambda_d^{-1}\, - \,1)$

I could conclude by comparing eigenvalues, but it is not of that form: how can I finish? I want to prove that all $\lambda_i = 1$, so that $\Lambda = I_d$ and $\Sigma = \Gamma$.

Is this possible? : A set A ⊆ R, and a function f : A → R that is discontinuous at an isolated point c ∈ A.

Posted: 30 Oct 2021 07:38 PM PDT

So my instructor is looking for the following:

Give an example of the following or explain why it doesn't exist: A set A ⊆ R, and a function f : A → R that is discontinuous at an isolated point c ∈ A.

By isolated point, he means that c ∈ A is not a cluster point of A.

My first thought before he pointed out what he meant by isolated point was the signum function where c = 0, but apparently I am not thinking about this correctly. Any ideas?

Approximating $\pi$ using Euler's Method Approximation with $y' = \frac{4}{1+t^2}$

Posted: 30 Oct 2021 07:38 PM PDT

The solution to the initial value problem $\frac{dy}{dt} = \frac{4}{1+t^2}, y(0) = 0$ has the value $y(1) = \pi$. How small would you have to make $\Delta t$ using Euler's method to get two correct digits of $\pi = 3.1415926\ldots$?

My Attempt

Using $\Delta t = 1$, we get $\pi \approx 4.000$; $\Delta t=0.5$ produces $\pi \approx 3.6$. Euler's method is first order, which means the global error is proportional to the step size: $e = A\,\Delta t$. If I want the error to hold to two decimal places, then $e < 10^{-2}$. $A$ can be estimated from a previous iteration of Euler's method: $A = (4-\pi)/1 \approx 0.858$, which means the needed $\Delta t = 0.0116279$. Apparently the answer is on the scale of $\Delta t = 2^{-7}$. Can someone please explain the problem with my approach?
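As a quick numerical cross-check (an editorial sketch, not the assigned solution; `eulerPi` is a made-up helper name), Euler's method for this IVP reproduces the values quoted in the question and shows the first-order error behavior:

```javascript
// Euler's method for dy/dt = 4/(1 + t^2), y(0) = 0, integrated to t = 1.
// The exact value is y(1) = 4 * atan(1) = pi.
function eulerPi(dt) {
  let t = 0, y = 0;
  const steps = Math.round(1 / dt);
  for (let i = 0; i < steps; i++) {
    y += dt * 4 / (1 + t * t);  // one forward-Euler step
    t += dt;
  }
  return y;
}

// Error shrinks roughly in proportion to dt (first-order method):
for (const dt of [1, 0.5, 2 ** -7]) {
  console.log(dt, eulerPi(dt), Math.abs(eulerPi(dt) - Math.PI));
}
// dt = 1   gives 4.000 and dt = 0.5 gives 3.6, as in the question.
```

With $\Delta t = 2^{-7}$ the absolute error lands just under $10^{-2}$, consistent with the quoted answer.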

Problems with deriving FT from FS using impulse train

Posted: 30 Oct 2021 07:46 PM PDT

There are two of the same equation given for the Inverse Fourier Transform:

$$ f(t) = {1 \over {2\pi}} \int_{-\infty}^{\infty} X(\omega) \, e^{j \omega t} \operatorname{d\omega} \;\;\;\;\;\text{or}\;\;\;\;\; f(t) = \int_{-\infty}^{\infty} X( f) \, e^{j 2 \pi f t} \operatorname{df}$$

which can be derived from the Fourier Series (1) $ \sum_{n=-\infty}^{\infty} c_n e^{jn\omega_0t} $ or (2) $ \sum_{n=-\infty}^{\infty} c_n e^{jn2\pi f_0t} $:

$$ \begin{align} (1) \implies x(t) = \sum_{n=-\infty}^{\infty} c_n e^{jn\omega_0t} \,&=\, \sum_{n=-\infty}^{\infty} \overbrace{\frac{1}{T_0} \int_{-\frac{T_0}{2}}^{\frac{T_0}{2}} x(t) \, e^{-jn\omega_0 t} \operatorname{dt}}^{c_n} \cdot e^{jn\omega_0 t} \,=\, \sum_{n=-\infty}^{\infty} \overbrace{\frac{\omega_0}{2\pi} \int_{-\frac{T_0}{2}}^{\frac{T_0}{2}} x(t) \, e^{-jn\omega_0 t} \operatorname{dt}}^{c_n} \cdot e^{jn\omega_0 t} \\ \,&=\, \sum_{n=-\infty}^{\infty} \overbrace{\lim_{\omega_0 \to 0} \frac{\omega_0}{2\pi} \int_{-\infty}^{\infty} x(t) \, e^{-jn\omega_0 t} \,\operatorname{dt}}^{c_n} \cdot e^{jn\omega_0 t} \,=\, \int_{-\infty}^{\infty} \operatorname{\frac{d\omega}{2\pi}} \Bigg[ \int_{-\infty}^{\infty} x(t) \, e^{-j\omega t} \,\operatorname{dt} \Bigg] \cdot e^{j\omega t} \\ \,&=\, \int_{-\infty}^{\infty} \overbrace{\Bigg[ \frac{1}{2\pi}\int_{-\infty}^{\infty} x(t) \, e^{-j\omega t} \,\operatorname{dt} \Bigg]}^{X(\omega)} \cdot e^{j\omega t} \,\operatorname{d\omega} \,=\, \mathcal{F}^{-1}\big\{X(\omega) \big\} \end{align} $$

$$ \begin{align} (2) \implies x(t) = \sum_{n=-\infty}^{\infty} c_n e^{jn 2 \pi f_0 t} \,&=\, \sum_{n=-\infty}^{\infty} \overbrace{\frac{1}{T_0} \int_{-\frac{T_0}{2}}^{\frac{T_0}{2}} x(t) \, e^{-jn 2 \pi f_0 t} \operatorname{dt}}^{c_n} \cdot e^{jn 2 \pi f_0 t} \,=\, \sum_{n=-\infty}^{\infty} \overbrace{f_0 \int_{-\frac{T_0}{2}}^{\frac{T_0}{2}} x(t) \, e^{-jn 2 \pi f_0 t} \operatorname{dt}}^{c_n} \cdot e^{jn 2 \pi f_0 t} \\ \,&=\, \sum_{n=-\infty}^{\infty} \overbrace{\lim_{f_0 \to 0} f_0 \int_{-\infty}^{\infty} x(t) \, e^{-jn 2 \pi f_0 t} \,\operatorname{dt}}^{c_n} \cdot e^{jn 2 \pi f_0 t} \,=\, \int_{-\infty}^{\infty} \operatorname{df} \Bigg[ \int_{-\infty}^{\infty} x(t) \, e^{-j 2 \pi f t} \,\operatorname{dt} \Bigg] \cdot e^{j 2 \pi f t} \\ \,&=\, \int_{-\infty}^{\infty} \overbrace{\Bigg[ \int_{-\infty}^{\infty} x(t) \, e^{-j 2 \pi f t} \,\operatorname{dt} \Bigg]}^{X(f)} \cdot e^{j 2 \pi f t} \,\operatorname{df} \,=\, \mathcal{F}^{-1}\big\{X(f) \big\} \end{align} $$

Either equation will result in the same $ x(t) $. Choosing $ \omega $ over $ f $, or vice versa, is just a matter of preference. They are identical :

$$ \mathcal{F}^{-1}\{X(\omega)\} \,=\, {1 \over {2\pi}} \int_{-\infty}^{\infty} X(\omega) \, e^{j \omega t} \operatorname{d\omega} \;=\; \int_{-\infty}^{\infty} X(f) \, e^{j 2 \pi f t} \operatorname{df} \,=\, \mathcal{F}^{-1}\{X(f)\}$$


An alternative way of deriving FT from FS is to use the sifting property of an impulse train (or Dirac Comb). Knowing that

$$ \delta(at)=\frac{1}{|a|}\delta(t) \,\,\implies\,\, \delta(f) = \delta \Big(\frac{\omega}{2\pi} \Big) = 2\pi \, \delta(\omega) \;\;\;\;\;\text{and}\;\;\;\;\; \operatorname{df} = \operatorname{d \frac{\omega}{2\pi}} = \frac{1}{2\pi} \operatorname{d\omega} $$

I can rewrite the FS $ \sum_{n=-\infty}^{\infty} c_n e^{jn2\pi f_0t} $ in terms of an impulse train :

$$ \begin{align} (2) \implies x(t) \,&=\, \sum_{n=-\infty}^{\infty} c_n \, e^{jn \omega_0 t} \,=\, \sum_{n=-\infty}^{\infty} c_n \, e^{jn 2 \pi f_0 t} \,=\, \sum_{n=-\infty}^{\infty} c_n \, \bigg[ \, \int_{-\infty}^{\infty} e^{j2\pi f_0t} \cdot \delta(f-nf_0) \,\operatorname{df} \,\bigg] \\ &= \int_{-\infty}^{\infty} \overbrace{\sum_{n=-\infty}^{\infty} c_n \, \delta(f-nf_0)}^{X(f)} \cdot e^{j2\pi ft} \operatorname{df} \,=\, \int_{-\infty}^{\infty} X(f) \cdot e^{j2\pi ft} \operatorname{df} \,=\, \mathcal{F}^{-1} \big\{X(f)\big\} \\ \end{align} $$

All good and makes sense... until I continue the derivation by rewriting the equation in terms of $ \omega $ instead of $ f $ :

$$ \begin{align} (1) &\implies \int_{-\infty}^{\infty} \overbrace{\sum_{n=-\infty}^{\infty} c_n \, \delta\bigg(\frac{\omega}{2\pi}-n\frac{\omega_0}{2\pi}\bigg)}^{X(\omega)} \cdot e^{j\omega t} \operatorname{d}\bigg(\frac{\omega}{2\pi} \bigg) \\ &=\, \int_{-\infty}^{\infty} \overbrace{\sum_{n=-\infty}^{\infty} c_n \, 2\pi \,\delta\bigg(\omega-n \omega_0 \bigg)}^{X(\omega)} \cdot e^{j\omega t} \bigg(\frac{1}{2\pi} \bigg) \operatorname{d \omega} \\ &= \int_{-\infty}^{\infty} \overbrace{\sum_{n=-\infty}^{\infty} c_n \, \,\delta\bigg(\omega-n \omega_0 \bigg)}^{X(\omega)} \cdot e^{j\omega t} \operatorname{d \omega} \,=\, \underbrace{\int_{-\infty}^{\infty} X(\omega) \cdot e^{j\omega t} \operatorname{d\omega}}_{\text{where is my}\; {1 / (2\pi)} \;\text{???}} \,=\, \mathcal{F}^{-1} \big\{X(\omega)\big\} \end{align} $$

We know there has to be a $ \frac{1}{2\pi} $ scaling factor whenever we write a FT in terms of $ \omega $, but that factor disappears if I derive FT from FS using $ \delta $ impulse train instead. Could someone explain what went wrong? I always feel dirty whenever I see a $ \delta $ in my equation.
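For what it's worth, a sketch of where the factor hides: the $2\pi$ from $\delta(f) = 2\pi\,\delta(\omega)$ belongs inside $X(\omega)$, not in the measure, so

```latex
X(\omega) = 2\pi \sum_{n=-\infty}^{\infty} c_n\,\delta(\omega - n\omega_0),
\qquad
x(t) = \frac{1}{2\pi}\int_{-\infty}^{\infty} X(\omega)\, e^{j\omega t} \operatorname{d\omega} .
```

In the last line of the derivation, the quantity labelled $X(\omega)$ (with the $2\pi$ cancelled away) is actually $X(\omega)/2\pi$, which is exactly the missing factor.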

Let G be a finite abelian group, show that there exist cyclic subgroups.

Posted: 30 Oct 2021 07:29 PM PDT

Let G be a finite abelian group, show that there exist cyclic subgroups $H_1, ..., H_n$ such that

$$G = H_1 ⊕ · · · ⊕ H_n.$$

Hint: Use induction in group order.

Why $\tau$ is open?

Posted: 30 Oct 2021 07:49 PM PDT

Let $X$ be a topological space and $U\subseteq X$; then $U\in \tau \iff$ for all $A⊆X,$ $\overline{U\cap\bar{A}}=\overline{U\cap A}.$

I already proved that if $U$ is an open subset of $X$, then the equality holds, using that $x\in\bar{A} \iff$ every open set containing $x$ intersects $A$.

But I'm stuck trying to prove that $U$ is in fact open if I assume the equality. My first try was to use the same theorem, but I didn't get anything, and then I tried to prove that $U=\operatorname{int}U$. Is that the way to prove it, or am I missing something?

EXTRA: I know that $\overline{U\cap\bar{A}}=\overline{U\cap A}$ tells me that $U$ is very close to $\bar{A}$ as well as to $A$, so in fact $U$ must be open. Any help will be appreciated.

unique equilibrium in a perturbed system

Posted: 30 Oct 2021 07:37 PM PDT

In today's lecture, we considered an asymptotically stable, hyperbolic equilibrium $x_*$ of $\dot{x}=a(x), \, x\in\mathbb{R}^n$. The prof then said it is "obvious" that for a small $\epsilon$, the perturbed system $\dot{x}=a(x)+\epsilon b(x)$ has another equilibrium $x_{**}$ near $x_*$, and that it is unique. We assume $a,b$ are smooth.

I feel this is plausible, since we move the system only a little bit by $\epsilon$, but I wonder how to write out a rigorous proof.
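A sketch of the standard argument (assuming the setup above): apply the implicit function theorem to

```latex
F(x,\epsilon) = a(x) + \epsilon\, b(x), \qquad F(x_*,0) = 0,
\qquad D_x F(x_*,0) = Da(x_*) \ \text{invertible},
```

where invertibility follows from hyperbolicity ($Da(x_*)$ has no eigenvalue on the imaginary axis, in particular no zero eigenvalue). The theorem then yields a smooth curve $\epsilon \mapsto x_{**}(\epsilon)$ with $F(x_{**}(\epsilon),\epsilon) = 0$ and $x_{**}(0) = x_*$, and uniqueness of the equilibrium in a neighborhood of $x_*$ for small $\epsilon$ is part of the theorem's conclusion.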

Adjoints for the restriction of category-valued representations of groups

Posted: 30 Oct 2021 07:22 PM PDT

Setup. Let $G$ be a group and let $\mathscr{A}$ be a category. We denote the category of functors from $G$ to $\mathscr{A}$ by $[G, \mathscr{A}]$ and think of these functors as $\mathscr{A}$-valued representations of $G$. (More explicitly, such a representation consists of an object $X$ of $\mathscr{A}$ and a homomorphism of groups from $G$ to $\operatorname{Aut}_{\mathscr{A}}(X)$.)

Let now $H$ be a subgroup of $G$. The inclusion map $i \colon H \to G$ can be regarded as a functor, which then induces a functor $$ \operatorname{res}^G_H ≔ i^* \colon [G, \mathscr{A}] \longrightarrow [H, \mathscr{A}] \,. $$ This functor restricts the $\mathscr{A}$-valued representations of $G$ to $\mathscr{A}$-valued representations of $H$.

Question. Under what conditions (on $\mathscr{A}$) does the restriction functor $\operatorname{res}^G_H$ admit a left adjoint, resp. a right adjoint?


This question is motivated by Exercise 2.1.16 in Tom Leinster's Basic Category Theory, which deals with $[G, \mathrm{Set}]$ and $[G, \operatorname{Vect}(\mathbb{k})]$.

I believe that I understand the following two special cases:

  • For every ring $R$ we have for $\mathscr{A} = \operatorname{Mod}(R)$ that $[G, \mathscr{A}] ≅ \operatorname{Mod}(R[G])$. We thus have the usual adjunctions $$ R[G] \otimes_{R[H]} (-) ⊣ \operatorname{res}^G_H ⊣ \operatorname{Hom}_{R[H]}(R[G], -) \,. $$
  • In the case of $\mathscr{A} = \mathrm{Set}$ we have similarly $[G, \mathscr{A}] ≅ G\textrm{-}\mathrm{Set}$ and adjunctions $$ G \times_H (-) ⊣ \operatorname{res}^G_H ⊣ \operatorname{Hom}_H(G, -) \,. $$

However, I don't expect these examples to generalize to an arbitrary category $\mathscr{A}$, as they both rely on some notion of tensor-hom adjunction.

What is the meaning of $n(\{∅\})=1$? Why and how is $n(\{∅\})$ equal to $1$?

Posted: 30 Oct 2021 07:07 PM PDT

If $n(A)=1$, then $A$ is a singleton set. And $n(∅)=0$ while $n(\{∅\})=1$. What is the meaning of $n(\{∅\})=1$? Why and how is $n(\{∅\})$ equal to $1$?

Please, somebody answer.

Why does the Modulo Operation in Programming Languages Simply Return the Remainder of Euclidean Division Instead of the Actual Modulo Operation?

Posted: 30 Oct 2021 07:03 PM PDT

While using JavaScript, I was disappointed to find out that the modulo operation % does not seem to work the way I would expect. For example, -1 % 3 returns -1 instead of 2 as I would expect. Modular arithmetic has been described to me as "clock" arithmetic in the past. With this mental model in mind, if I were to rotate 1 unit counterclockwise on a clock with three tick marks (labeled 0 through 2) from tick 0, I should end up at 2, right? This interpretation of modular arithmetic also seems to make more sense in general. Why then, would a programming language choose to implement the mod operator this way? Have I misinterpreted what modular arithmetic means mathematically?
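For reference, a small sketch (`mod` is a made-up helper name, not a built-in): JavaScript's % is a remainder whose sign follows the dividend, and the "clock" residue the question expects can be recovered by shifting the result into range:

```javascript
// JavaScript's % takes the sign of the dividend:
console.log(-1 % 3); // -1

// A mathematical modulo: result always in [0, m) for m > 0.
function mod(n, m) {
  return ((n % m) + m) % m;
}

console.log(mod(-1, 3)); // 2, matching the "clock" interpretation
```

The inner `% m` brings the value into $(-m, m)$, adding `m` makes it positive, and the outer `% m` folds the already-nonnegative cases back down.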

In proving $(1/2)(A-A^T)$ is skew symmetric, how does the negative sign get in front of the expression?

Posted: 30 Oct 2021 07:30 PM PDT

Prove the matrix $\frac12(A-A^T)$ is always skew symmetric.

Proof:

$$\left(\frac12(A-A^T)\right)^T \,=\,\left(\frac12A-\frac12A^T\right)^T \,=\,\frac12A^T-\frac12A \,=\,\frac12(A^T-A).$$ I understand this, but the proof I see has one more line: $$=-\frac12(A-A^T)$$ and I do not understand that step. Can someone please explain?
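A sketch of the step in question: it is just factoring $-1$ out of the parenthesis,

```latex
\frac12\left(A^T - A\right) = \frac12\bigl(-(A - A^T)\bigr) = -\frac12\left(A - A^T\right),
```

which is exactly the defining property $M^T = -M$ of a skew-symmetric matrix, with $M = \frac12(A - A^T)$.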

Calculating the discriminant of $\mathbb{Q}(\sqrt{d})/\mathbb{Q}$. What is going wrong?

Posted: 30 Oct 2021 07:28 PM PDT

My main issue is with the case when $d \equiv 1 \mod 4$ is a squarefree integer. In this situation I know that the ring of integers in $\mathbb{Q}(\sqrt{d})$ is $\mathbb{Z}[(\sqrt{d} + 1)/2]$. Now, we observe that $\mathbb{Z}[(\sqrt{d} + 1)/2]$ has a $\mathbb{Z}$-basis given by $\{1, (1 + \sqrt{d})/2\}$ (I suspect that this is my mistake).

Here I will take the definition of the trace to be the sum of the Galois conjugates. The Galois group of $\mathbb{Q}(\sqrt{d})/\mathbb{Q}$ is $\mathbb{Z}/2\mathbb{Z}$, generated by the automorphism sending $\sqrt{d} \mapsto -\sqrt{d}$. As such, $\operatorname{Tr}(1) = 2$ and $\operatorname{Tr}((1 + \sqrt{d})/2) = 1/2 + 1/2 = 1$.

Similarly, we see that $$\left(\frac{1 + \sqrt{d}}{2} \right)^2 = \frac{1 + 2\sqrt{d} + d}{2}$$ has trace $1 + d$. Hence, the discriminant is the determinant of the matrix $$\begin{bmatrix} 2 & 1\\ 1 & 1 + d \end{bmatrix}$$ which is $2d + 1$. Now, Wikipedia states that this discriminant is $d$, which leads me to question my solution.

Where is my mistake?

Thanks!

EDIT: I'll add that I'm defining the discriminant to be $\operatorname{det}(\operatorname{Tr}(x_ix_j))$ where $\{x_i\}$ is an $\mathbb{Z}$-basis of the ring of integers. The alternative definition, which defines it to be $[\operatorname{det}(x_i^{\sigma_j})]^2$ gives the solution more clearly but I am nevertheless unsettled by the fact that my above solution is wrong.

Can I Plug In Infinity to Verify the Equation: $\tan \left( \frac{\pi}{2} - x\right) = \cot x$

Posted: 30 Oct 2021 07:45 PM PDT

When working on the following equation for my pre-calculus class a couple nights ago

Verify that: $$ \tan \left(\frac{\pi}{2} - x\right) = \cot x $$

I decided to answer the question by using the fact that $$ \tan \left(\frac{\pi}{2} - x\right) = \frac{\tan \frac{\pi}{2} - \tan x}{1 + \tan \frac{\pi}{2} \tan x}. $$

I realized that this formula would not help me answer the question for my math class because, as we learned in class, $\tan \frac{\pi}{2}$ is undefined. However, I proceeded with this method anyway (knowing it was erroneous), plugging in $\pm \infty$ for $\tan \frac{\pi}{2}$, yielding $$ \frac{\pm \infty - \tan x}{1 + (\pm \infty) \tan x} $$

which can be simplified down to $$ \frac{1 - 0}{0 + \tan x} = \frac{1}{\tan x} = \cot x $$

by dividing the numerator and the denominator by $\pm \infty$.

In an attempt to justify the fact that $\frac{\infty}{\infty}$ should be 1 and $\frac{1}{\infty}$ and $\frac{\tan x}{\infty}$ should be 0, I turned to limits. Whilst not knowing too much calculus, I learned last year in Algebra II that $\infty$ can be somewhat approximated with limits. As such, I defined a new constant $\beta$ as $$ \beta = \lim_{x \rightarrow \infty} x $$

and $\alpha$ as $\beta$'s reciprocal. From my understanding of limits, that just means $\beta$ is essentially some very large number, so I should still be able to do arithmetic with it ($\alpha \beta = 1$, $\beta - \beta = 0$, etc.).

So then my equation becomes: $$ \frac{\beta - \tan x}{1 + \beta \tan x} = \frac{\beta - \tan x}{1 + \beta \tan x} \cdot \frac{\alpha}{\alpha} = \frac{1 - \alpha \tan x}{\alpha + \tan x} $$

which is essentially $\cot x$ at the end, because $\alpha \approx 0$.

I have been constantly told not to use infinities in equations since elementary school, yet here it seems to yield the correct answer. Why is that, and when can these infinities be used in equations? If I got this wrong, where did I err?
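A sketch of how the plug-in argument can be made rigorous with a limit instead of a bare $\infty$ symbol: for fixed $x$ with $\tan x \neq 0$, let $u \to \frac{\pi}{2}^-$, so that $T = \tan u \to +\infty$, and

```latex
\tan\!\left(\tfrac{\pi}{2} - x\right)
= \lim_{u \to \pi/2^-} \tan(u - x)
= \lim_{T \to +\infty} \frac{T - \tan x}{1 + T \tan x}
= \lim_{T \to +\infty} \frac{1 - (\tan x)/T}{1/T + \tan x}
= \frac{1}{\tan x} = \cot x .
```

Every step here uses only finite numbers and the limit laws, which is essentially what the informal "divide by $\pm\infty$" manipulation was gesturing at.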

Integral of the log of the absolute value over a complex disc

Posted: 30 Oct 2021 07:47 PM PDT

Stuck on this integration problem, it's probably really obvious. Talk to me like someone who's taken complex analysis but forgets the rules. Advisor said something about the Cauchy Integral Formula, but I'm not sure how that applies.

$U(z)= \frac{1}{2 \pi} \int_0^{2 \pi} \log \frac{1}{|z - a - r e ^{i \theta}|} d \theta$

The given solution is $\log \frac{1}{r}$ for $|z - a| \leq r$ and $\log \frac{1}{|z - a|}$ for $|z - a| \geq r$. My first obvious step was to make the substitution $w = z - a$ so the new problem is $U(w)= \frac{1}{2 \pi} \int_0^{2 \pi} \log \frac{1}{|w - r e ^{i \theta}|} d \theta$

Case 1: $|w| > r$. Write the function $h(\zeta)= \log\frac{1}{|w - \zeta|}$. h is harmonic on the disc $D(0, r + \epsilon)$ for some $\epsilon > 0$, so the Mean Value Property for harmonic functions says $h(0)= \frac{1}{2 \pi} \int_0^{2 \pi} \log \frac{1}{|w - r e ^{i \theta}|} d \theta$. Thus $U(w)=h(0)=\log \frac{1}{|w|} = \log \frac{1}{|z - a|}$.

Case 2: $|w| \leq r$. In this case, the function $h(\zeta)=\log \frac{1}{|w - \zeta|}$ is not harmonic in the interior of the disc (it has a logarithmic singularity at $\zeta = w$), so the Mean Value Property does not apply. I can't figure out case 2, but I'm still working on it.

(Edit to fix the problem in case 1.)

Are Polynomials Simply Base-$x$?

Posted: 30 Oct 2021 07:18 PM PDT

Is it useful to think of polynomials as simply being numbers written in base-$x$? For example, given the polynomial $x^2 + x + 1$, when we try to solve it, aren't we just asking in which base(s) this polynomial equals zero?
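One way to make the analogy concrete (an editorial sketch; `evalPoly` is a made-up name): with digit coefficients, evaluating the polynomial at $x = 10$ recovers the base-10 numeral. Solving is a different question, though: a "base" is a fixed number, while the roots of $x^2 + x + 1$ are not even real.

```javascript
// Horner evaluation of a polynomial, coefficients from highest to
// lowest degree; at x = 10 this reads the coefficients as digits.
function evalPoly(coeffs, x) {
  return coeffs.reduce((acc, c) => acc * x + c, 0);
}

console.log(evalPoly([1, 1, 1], 10)); // 111  (x^2 + x + 1 at x = 10)
console.log(evalPoly([1, 1, 1], 2));  // 7    (same polynomial, "base 2")
```

The analogy only holds while each coefficient is a valid digit for the chosen base; polynomials allow arbitrary (even negative or non-integer) coefficients, which is where it breaks down.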

There exists $R>r$ such that $\overline{D}(a,r)\subset {D}(a,R)\subset U$

Posted: 30 Oct 2021 07:28 PM PDT

Let $U$ be an open set of $\mathbb{C}$ and $a\in U$ and $r>0$ we suppose that $$\overline{D}(a,r):\{z\in \mathbb{C}:|z-a|\leq r\}\subset U$$ can we say that there exists $R>r$ such that $$\overline{D}(a,r)\subset {D}(a,R)\subset U$$ with ${D}(a,R):\{z\in \mathbb{C}:|z-a|< R\}$

An idea please?

Geometric intuition behind Cauchy not having a mean

Posted: 30 Oct 2021 07:39 PM PDT

I'm trying to follow the geometric intuition behind the Cauchy distribution in the book "An introduction to probability theory and its applications" by Feller volume 2, first edition. On page 51, he describes the density of the Cauchy distribution as:

$$\gamma_t = \frac{t}{(t^2+x^2)\pi}$$

He then describes an experiment where a ray of light is emitted horizontally onto a vertical mirror from a source O. The light strikes the mirror at point A and the mirror can rotate about a vertical axis passing through A. This is shown in the figure below. The light reflects off the mirror and strikes the wall O is on at a distance $X$ from O.

[Figure: a horizontal ray from the source O strikes the rotating mirror at A and reflects back to the wall, hitting it at distance X from O.]

The first assertion is that if $\phi$ is uniformly distributed on $(-\frac{\pi}{2}, \frac{\pi}{2})$, then $X$ has the density given by $\gamma_t$. I managed to prove this. Then he says it is apparent that if this experiment is repeated $n$ times and the average taken, then this average $\frac{X_1+X_2+\dots+X_n}{n}$ will have the same distribution as $X_1$. I didn't follow this part. I guess I could derive it from the density, but Feller seems to be hinting at some kind of obvious geometric intuition which I'm completely missing.

Finding one-sided limits of a double asymptotes function

Posted: 30 Oct 2021 07:07 PM PDT

Assume that $h(\gamma)=\dfrac{\gamma-a_0}{\gamma+a_0}\dfrac{\gamma-a_l}{\gamma+a_l}$ with $a_0 \leq 0$, $a_l \leq 0$, and $|a_l| > |a_0|$. Show that

  • $\underset{\gamma \rightarrow -a_0^{-}}{\lim} h(\gamma)=-\infty$
  • $\underset{\gamma \rightarrow -a_0^{+}}{\lim} h(\gamma)=\infty$
  • $\underset{\gamma \rightarrow -a_l^{-}}{\lim} h(\gamma)=\infty$
  • $\underset{\gamma \rightarrow -a_l^{+}}{\lim} h(\gamma)=-\infty$

I know there are two vertical asymptotes in the graph, at $\gamma=-a_0$ and $\gamma=-a_l$.

How do I do this algebraically?

Real Analysis: Why do we take an arbitrary delta value to prove that a function is continuous

Posted: 30 Oct 2021 07:27 PM PDT

I was watching this video showing how to prove that a function is continuous and how to prove that it is not. I understand the steps except for one at 3:25, when he takes $ \delta = 1 $, and then he takes $$ \delta = \min \left \{ 1 , \frac \varepsilon { \max \{ | 2 a + 1 | , | 2 a - 1 | \} } \right \} \text . $$ What I'm having trouble understanding is why he used $ \delta = 1 $ specifically, and why he took the minimum with those two values. In particular, why $ \frac \varepsilon { \max \{ | 2 a + 1 | , | 2 a - 1 | \} } $?

Everything laid out perfectly at the end to get $ < \varepsilon $. So I would like to know why and how he made those choices.

I'm sorry for not formatting correctly, I am new in this stack.

Thanks in advance

Difference between Product and Cylindrical $\sigma-$ algebras?

Posted: 30 Oct 2021 07:19 PM PDT

What is the difference between the product and the cylinder $\sigma$-algebra? Wikipedia says that "the cylindrical $\sigma$-algebra or product $\sigma$-algebra is a $\sigma$-algebra often used in the study of either product measures or probability measures of random variables on Banach spaces." However, it gives no definition or intuitive explanation of how they differ, or maybe it is me who cannot explain them.

Can anybody give an intuitive example?

I have one in mind that says:

Consider a set of messages $M=\{c,s\}$, where $c$ means continue the conversation and $s$ means stop the conversation. Let $H_t=(M\times M)^{t-1}$, $t=1,2,\dots$ be the set of all pairs of messages in $M$ possibly sent before stage $t$, and let $H_{\infty}=(M\times M)^{\mathbb{N}}$. The standard way to provide these sets with a measurable structure is the following. Let $\mathcal{H}_{t}$ be the algebra over $H_{\infty}$ generated by cylinder sets of the form $h_{t-1}\times H_{\infty}$, where $h_{t-1}$ is a sequence in $H_t$. Let $\mathcal{H}_{\infty}$ be the $\sigma$-algebra over $H_{\infty}$ generated by the algebras $\mathcal{H}_t$, $t=1,2,\dots$, and $\mathbb{N}=\{\{1\},\{2\},\{1,2\}\}$ describes the sets of players possibly choosing $s$ at some stage.

If the set of histories in the two-player case is built from this $M$, which is finite, the $\sigma$-algebras that are generated are cylinder ones according to this paradigm; why? Does this depend on the set $M$? If the set were not finite but something like $[0,1]$ or $[0,+\infty)$, or if we had $I$ players, namely $H_t=(\underbrace{M\times M\times\dots\times M}_{\text{$I$ copies of $M$}})^{t-1}$, would the assumption of cylinder $\sigma$-algebras still remain?

I have seen that this answer is quite good, but are the Borel $\sigma$-algebras cylindrical ones? Can we explain this intuitively with the above example?

How should I evaluate $\int_0^{\infty}\int_0^{\infty}\frac{\sqrt[3]{x}}{1+\sqrt[3]{x}}e^{-y\pi\left(x^2+1/x^2+1\right)}\ \mathrm{d}y\ \mathrm{d}x$?

Posted: 30 Oct 2021 07:19 PM PDT

$$\int_0^{\infty}\int_0^{\infty}\frac{\sqrt[3]{x}}{1+\sqrt[3]{x}}e^{-y\pi\left(x^2+1/x^2+1\right)}\ \mathrm{d}y\ \mathrm{d}x$$

$$\int_0^{\infty}\frac{\sqrt[3]{x}}{1+\sqrt[3]{x}}\frac{1}{\pi\left(x^2+1/x^2+1\right)}\ \mathrm{d}x$$

$$\frac1{\pi}\int_0^{\infty}\frac{\sqrt[3]{x}}{1+\sqrt[3]{x}}\frac{x^2}{x^4+x^2+1}\ \mathrm{d}x$$

then substituting $t^3=x$

$$\frac3{\pi}\int_0^{\infty}\frac{t^9}{(1+t)(t^{12}+t^6+1)}\ \mathrm{d}t$$

What should I do after this? Should I write $\frac{1}{1+t}$ as $\sum(-1)^kt^k$?
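Before going further, the substitution step can at least be sanity-checked numerically (an editorial sketch; `simpson` is a hand-rolled helper, and both integrals are truncated at the matching endpoints $x = 1000$ and $t = 10$, so the truncated values should agree up to quadrature error):

```javascript
// Composite Simpson's rule on [a, b] with n (even) subintervals.
function simpson(f, a, b, n) {
  const h = (b - a) / n;
  let s = f(a) + f(b);
  for (let i = 1; i < n; i++) s += f(a + i * h) * (i % 2 ? 4 : 2);
  return (s * h) / 3;
}

// Integrand in x after the y-integration (the 1/pi factored out):
const fx = x =>
  (Math.cbrt(x) / (1 + Math.cbrt(x))) * (x * x / (x ** 4 + x * x + 1));
// Integrand after the substitution t^3 = x:
const ft = t => (3 * t ** 9) / ((1 + t) * (t ** 12 + t ** 6 + 1));

const I1 = simpson(fx, 0, 1000, 200000) / Math.PI;
const I2 = simpson(ft, 0, 10, 20000) / Math.PI;
console.log(I1, I2); // the two forms should agree closely
```

Agreement here only validates the algebra up to this point; it says nothing about whether the geometric-series expansion of $\frac{1}{1+t}$ is the right next move.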

Why is there this relationship between the Collatz Conjecture and Riordan Arrays?

Posted: 30 Oct 2021 07:19 PM PDT

I was playing around with the Collatz conjecture the other day. Previously, I realized the problem could be shortened by concerning yourself only with odd numbers. If you do this, you can make a Collatz tree of only odd numbers. The tree starts at 1 and each node's children are determined by the function $$ C(p, n) = \frac{p2^{2n - p\bmod 3 + 1} - 1}{3} \cdot \frac{p \bmod 3}{p \bmod 3} $$ where $p$, the parent node value, is an odd number, and $n$ is a positive integer.

When I was playing around with the conjecture the other day, I decided to make a function $\chi$ that takes in an odd integer and returns the $n$ value indicating which child it is of its parent (for example, $\chi(5) = 2$ because $C(1, 2) = 5$).

I wrote the following code to brute force the values of $\chi$ from 1 to 99 in JavaScript:

// For each odd i, count how many factors of 2 divide 3*i + 1,
// then emit (count + count % 2) / 2 as the next term.
outText = '';
for (let i = 1; i < 1e2; i += 2) {
    let count = 0, current = 3 * i + 1;
    while (current % 2 === 0) {
        current /= 2;
        count++;
    }
    outText += (count + count % 2) / 2 + ',';
}
console.log(outText);

and got 1,1,2,1,1,1,2,1,1,1,3,1,1,1,2,1,1,1,2,1,1,1,2,1,1,1,3,1,1,1,2,1,1,1,2,1,1,1,2,1,1,1,4,1,1,1,2,1,1,1, as the output in the console. I went to the OEIS and searched up this sequence and got the A115362 sequence as a result. The OEIS website describes this sequence as Row sums of ((1,x) + (x,x^2))^(-1)*((1,x)-(x,x^2))^(-1) (using Riordan array notation).

I don't know much mathematics, so I naturally came here to ask: what is the connection, if there is any, between these two sequences? Is the answer obvious? Also, this is my first time asking a question on Stack Exchange, and I was wondering if any of you could give me tips on how to ask questions on this site. Thank you very much.

Where is the mistake? Finding an equation for the ellipse with foci $(1,2)$, $(3,4)$, and sum of distances to the foci equal to $5$.

Posted: 30 Oct 2021 07:04 PM PDT

Find an equation for the ellipse with foci $(1,2)$ and $(3,4)$, and sum of distances to the foci equal to $5$.

We consider the foci in the coordinate system $XY$ such that $X=x-2$ and $Y=y-1-x$, the coordinates of the foci in this system are $(-1,0)$ and $(1,0)$, furthermore $2a=5$, the equation of the ellipse in $XY$ is \begin{equation} \left( \frac{X}{2.5}\right)^2 + \left( \frac{Y}{\sqrt{5.25}} \right)^2 = 1 \end{equation}

and this can be expressed in $xy$ as

$$\left( \frac{x-2}{2.5} \right)^2 + \left( \frac{y-1-x}{\sqrt{5.25}}\right)^2 = 1$$

I have made the graph of the last equation and it is not the case that foci are $(1,2)$ and $(3,4)$, so, can anyone help me to see the mistake please?

Essential Surface in $F\setminus\!\{\text{point}\} \times S^1$

Posted: 30 Oct 2021 07:20 PM PDT

So I am reading about essential surfaces, and I know that there is an essential torus in a punctured surface times $S^1$. I just don't see it. The only torus I can think of would be one $\partial$-parallel to the puncture times $S^1$. Where does it lie?

Secondly, are there examples, other than this and $F\times \{pt\}$ in the mapping torus of $F$, of closed essential surfaces in a $3$-manifold? I am thinking about broad categories like the examples I gave.

Thank you so much!

Existence of real analytic diffeomorphisms with prescribed values on a finite set

Posted: 30 Oct 2021 07:33 PM PDT

My question is the following: given a finite set $ F = \{ x_1, \dots, x_k \} \subset \mathbb{R} $ such that $x_i < x_{i+1}$ for all $i = 1, \dots, k-1$ and numbers $a_1, \dots, a_k \in \mathbb{R}$ such that $ a_i < a_{i+1}$ for $i=1,\dots,k-1 $, is it possible to find a real analytic diffeomorphism $f: \mathbb{R} \to \mathbb{R}$ such that $f(x_i) = a_i$?

I know that it's easy to find real analytic functions satisfying these conditions (for instance, using polynomial interpolation), but is there a way to get at least one that is a diffeomorphism (i.e. with $f'>0$ and surjective)? Thanks!

Power Set, Replacement or Infinity axioms unprovable in set theory

Posted: 30 Oct 2021 07:25 PM PDT

My question is related to the independence of the Power Set, Replacement, and Infinity Axioms. Can we show that Power Set, for example, is unprovable in $ZFC-P$? Is $\neg$Power Set unprovable in $ZFC-P$? (These questions go for Replacement and Infinity as well). I read that we cannot show that $\text{Con}(ZFC -P) \rightarrow \text{Con}(ZFC)$. Does this mean we can only show that either $ZFC-P \vdash \text{Power Set}$ or $ZFC-P \vdash \neg\text{Power Set}$ but not both?

Here's my intuition for these questions without looking at the model theory for it. It seems like the Power Set Axiom is unprovable from the other axioms because we can use the $H(\kappa)$ as a model for $ZFC-P$. But I don't think that $\neg$Power Set is provable from the others because that would imply that $\text{Incon}(ZFC)$. And if that were true, then we probably wouldn't work in $ZFC$. So I'm guessing that either $\neg$Power Set is unprovable or that we are unable to decide this finitistically because of some Second Incompleteness Theorem argument.

Is it informally believed that Power Set, as well as Replacement and Infinity, are independent from the others despite us potentially not being able to show it finitistically?

And finally, we know that the following axiomatic systems for set theory are all equivalently consistent: $ZF^- \sim ZF \sim ZFC^- \sim ZFC$ (where $\Gamma^-$ is $\Gamma$ minus the Axiom of Foundation). That is if we assume the consistency of one of these sets of axioms, then we can conclude the others are also consistent. How far can we reduce the assumption of consistency within the axioms, i.e. we have reduced accepting the consistency of $ZFC$ to accepting the consistency of $ZF^-$, can we do this any further with Power Set, Replacement and Infinity?

Functions $f:\mathbb R\to\mathbb R$ satisfying $f\left(\frac{x+y}r\right)=\frac{f(x)+f(y)}s$ [closed]

Posted: 30 Oct 2021 07:29 PM PDT

Let $r$ and $s$ be distinct nonzero rational numbers. Find all functions $f:\mathbb R \to\mathbb R$ such that $f\left(\frac{x+y}r\right) = \frac{f(x)+f(y)}s$.

My attempt:

If we set $x=0$ and $y=0$, then we have $f(0) = \frac{f(0)+f(0)}s$, which gives $s=2$ provided $f(0) \neq 0$.

If we set $x=0$, we get $s\times f\left(\frac yr\right)-f(y)=f(0)$. $(1)$

If we set $y=0$, we get $s\times f\left(\frac xr\right)-f(x)=f(0)$. $(2)$

Setting $(1)$ and $(2)$ equal, $s\times f\left(\frac xr\right) - s\times f\left(\frac yr\right) = f(x) - f(y)$.

I don't know where this is going and need help.

Implicit Differentiation of Two Functions in Four Variables

Posted: 30 Oct 2021 07:03 PM PDT

Can someone please help with this homework problem: Given the equations \begin{align} x^2 - y^2 - u^3 + v^2 + 4 &= 0 \\ 2xy + y^2 - 2u^2 + 3v^4 + 8 &= 0 \end{align} find $\frac{\partial u}{\partial x}$ at $(x,y) = (2,-1)$.

In case it may be helpful, we know from part (a) of this question (this is actually part (b)) that these equations determine functions $u(x,y)$ and $v(x,y)$ near the point $(x,y,u,v) = (2, -1, 2, 1)$.

While I am taking a high-level multivariable calculus class, I have not yet taken linear algebra or differential equations, so please refrain from using such techniques in your solutions.

Thanks a lot!
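A sketch of one elimination-only route (no linear algebra beyond substitution; using the point $(x,y,u,v)=(2,-1,2,1)$ from part (a)): differentiate both equations with respect to $x$, holding $y$ fixed and treating $u,v$ as functions of $(x,y)$:

```latex
2x - 3u^2\,u_x + 2v\,v_x = 0, \qquad 2y - 4u\,u_x + 12v^3\,v_x = 0 .
```

At $(2,-1,2,1)$ these read $4 - 12u_x + 2v_x = 0$ and $-2 - 8u_x + 12v_x = 0$; substituting $v_x = 6u_x - 2$ from the first into the second gives $64u_x = 26$, i.e. $u_x = \frac{13}{32}$ (worth re-deriving rather than trusting this sketch).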
