Sunday, February 27, 2022

Recent Questions - Mathematics Stack Exchange


Finding a sufficient condition for a property of convex function.

Posted: 27 Feb 2022 10:21 AM PST

Let $f: \mathbb{R}_+ \to \mathbb{R}$ satisfy:

  1. $f'(x)<0$ for all $x \in \mathbb{R}_+$,
  2. $f''(x)>0$ for all $x \in \mathbb{R}_+$,
  3. $f(0)=1$ and $\displaystyle\lim_{x\to +\infty}f(x)=0$.

My questions are

  1. Is the following statement true? There exists $N>0$ such that the function $\dfrac{xf'(x)}{f(x)}$ is increasing, or equivalently $$x(f'(x))^2 > f(x)f'(x)+xf(x)f''(x), \forall x\geq N.$$
  2. If it is not true in general, I would like to find a sufficient condition to ensure it.

Thank you very much for any comments or discussion. Just one more thing: from my previous post, I know that, in general, the function $\dfrac{xf'(x)}{f(x)}$ is not increasing on $\mathbb{R}_+$.

Mathematics behind Pep Guardiola's Tiki Taka Soccer

Posted: 27 Feb 2022 10:19 AM PST

Though I am an Arsenal fan, a few years ago Unai Emery managed our side wonderfully, even if he did not deliver a top-4 finish in the 2018/19 season. He used one of the footballing methods called Tiki-Taka. This Tiki-Taka style was famously used by Pep Guardiola's two-time treble-winning FC Barcelona from 2008 to 2012, and he still uses this method at Man City. I just want to ask: is there any mathematical secret behind the success of Tiki-Taka (some algebra, geometry, trigonometry, or even some calculus involved in it)?

Can any vector not in a linear space be separated from it?

Posted: 27 Feb 2022 10:17 AM PST

For context, I was exploring a bit more this question I asked before.

Suppose that $E$ is a vector space such that its algebraic dual $E^\ast$ can separate points, i.e. for all $x\in E\setminus\{0\}$, there is $f\in E^\ast$ such that $f(x)\neq 0$.

I am wondering if the following can be proved or disproved :

If $S$ is a linear subspace of $E$, and $x\notin S$, then there is $f\in E^\ast$ such that $f(x)\notin f(S)$

I am not sure exactly how to address this, but first it is obvious that such an $f$ must satisfy $f(S)=\{ 0\}$, since otherwise $f(S)=\mathbb R$. Therefore we are looking for $f\in E^\ast$ such that $f(x)\neq 0$ and $f(S)=\{0\}$. Alternatively, I thought about assuming the statement is false and then trying to construct a point $y\in S$ such that $f(x)=f(y)$ for all $f\in E^\ast$, which would prove that $x=y\in S$; maybe some sort of projection of $x$ onto $S$, but I don't know how to do that from just $E^\ast$ and no norm. Any input would be much appreciated.

Properties of binomial coefficient

Posted: 27 Feb 2022 10:15 AM PST

I'm trying to prove that $$\binom{2n}{n} =\sum_{i=0}^{n}\binom{n}{i}^2 =\sum_{i=0}^{n}\binom{n}{i}\binom{n}{n-i}\text{.}$$ Okay, I understand why $\binom{2n}{n}=\sum_{i=0}^{n}\binom{n}{i}\binom{n}{n-i}$. For the left-hand side, among $2n$ people, we select $n$ people. On the right-hand side, the committee can consist of $i$ boys and $n-i$ girls, for $i\in\{0,1,2,\dots,n\}$. We select $i$ boys and $n-i$ girls for each $i$.

My Question: I don't understand how $\sum_{i=0}^{n}\binom{n}{i}^2$ is an equivalent expression. I believe $\sum_{i=0}^{n}\binom{n}{i}^2=\sum_{i=0}^{n}\binom{n}{i}\binom{n}{i}$. How does this equate to the two expressions on either side? Can someone give me an intuitive/"story-like" explanation (just like what I did above)?
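Not an answer to the "story" question, but a quick sanity check of all three expressions (a small Python sketch, not part of the original post) confirms they agree, and shows why the middle and right sums match term by term via $\binom{n}{i}=\binom{n}{n-i}$:

```python
from math import comb

# Verify C(2n, n) == sum_i C(n, i)^2 == sum_i C(n, i) * C(n, n - i) for small n.
for n in range(10):
    central = comb(2 * n, n)
    sum_sq = sum(comb(n, i) ** 2 for i in range(n + 1))
    sum_pair = sum(comb(n, i) * comb(n, n - i) for i in range(n + 1))
    # The two sums are equal term by term, since comb(n, i) == comb(n, n - i).
    assert central == sum_sq == sum_pair, (n, central, sum_sq, sum_pair)
```

The symmetry $\binom{n}{i}=\binom{n}{n-i}$ is exactly what turns the boys-and-girls sum into the sum of squares.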

Bracket notation on a matrix. What is $[A]$?

Posted: 27 Feb 2022 10:11 AM PST

I suppose this could be a silly question, but I don't know how to interpret this term because this is the first time I have seen this notation.

Normally I use the bracket of two matrices as the commutator $[A,B]$.

But I found in university notes the notation $[A]$ for a single matrix, and I don't know its definition. And obviously, if I try to search on Google what $[\,\cdot\,]$ means when applied to matrices, all sites refer to the commutator.

Gradient of a dot product (getting the wrong answer!)

Posted: 27 Feb 2022 10:08 AM PST

I want to show that ($a$ and $b$ are both vectors):

$$\nabla(a\cdot b)=(a\cdot\nabla)b+(b\cdot\nabla)a+a\times(\nabla\times b)+b\times(\nabla\times a)$$ I am trying to do this by using: $$a\times(\nabla\times b)=\nabla(a\cdot b)-b(a\cdot\nabla)$$

And: $$b\times(\nabla\times a)=\nabla(b\cdot a)-a(b\cdot\nabla)$$

So when I sum everything up I get (using that $a\cdot b = b\cdot a$): $$\nabla(a\cdot b)=(a\cdot\nabla)b+(b\cdot\nabla)a+\nabla(a\cdot b)-b(a\cdot\nabla)+\nabla(b\cdot a)-a(b\cdot\nabla),$$ and then I use that $(a\cdot\nabla)b=b(a\cdot\nabla)$ and that $(b\cdot\nabla)a=a(b\cdot\nabla)$, and all I have left is:

$$\nabla(a\cdot b)=2\nabla(a\cdot b)$$

Which of course is wrong! Where am I going wrong?

/Willingtolearn
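For what it's worth, the target identity itself can be checked symbolically, provided $(a\cdot\nabla)b$ is implemented as an operator acting componentwise on $b$ rather than as $b$ times a scalar, which is where the manipulation above seems to go astray. A SymPy sketch (the sample fields and the helper `dir_deriv` are my own illustrative choices):

```python
import sympy as sp
from sympy.vector import CoordSys3D, gradient, curl, Vector

N = CoordSys3D('N')
x, y, z = N.x, N.y, N.z

# Two arbitrary smooth vector fields for the check.
a = x * y * N.i + z**2 * N.j + x * N.k
b = y * N.i + x * z * N.j + y**2 * N.k

def dir_deriv(v, w):
    """(v . nabla) w, applied componentwise: an operator on w, not a scalar times w."""
    out = Vector.zero
    for e in (N.i, N.j, N.k):
        wc = w.dot(e)
        out += (v.dot(N.i) * sp.diff(wc, x)
                + v.dot(N.j) * sp.diff(wc, y)
                + v.dot(N.k) * sp.diff(wc, z)) * e
    return out

lhs = gradient(a.dot(b))
rhs = dir_deriv(a, b) + dir_deriv(b, a) + a.cross(curl(b)) + b.cross(curl(a))
d = lhs - rhs
assert all(sp.simplify(d.dot(e)) == 0 for e in (N.i, N.j, N.k))
```

The assertion passing for these fields is consistent with the identity; the circular cancellation above comes from treating the operator $(a\cdot\nabla)b$ as the commuting product $b(a\cdot\nabla)$.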

How to show an exponential sequence diverges: $a = \left(1 + \frac{1}{n}\right)^{7n^{2}\log_{7}(n)}$?

Posted: 27 Feb 2022 10:11 AM PST

This is the sequence:

$$a = \left(1 + \frac{1}{n}\right)^{7n^{2}\log_{7}(n)}$$

I know that the sequence diverges. But how do I prove it?
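Not a proof, but the standard route is to look at $\ln a_n = 7n^2\log_7(n)\ln(1+\tfrac1n)$ and use $n\ln(1+\tfrac1n)\to 1$, so $\ln a_n \sim 7n\ln n/\ln 7 \to \infty$. A quick numerical sketch of that growth (my own check, not from the post):

```python
from math import log

def log_a(n):
    # ln a_n = 7 * n^2 * log_7(n) * ln(1 + 1/n)
    return 7 * n**2 * (log(n) / log(7)) * log(1 + 1 / n)

vals = [log_a(n) for n in (10, 100, 1000, 10000)]
# ln a_n grows without bound (roughly like 7 n ln(n) / ln(7)), so a_n diverges.
assert all(b > 10 * a for a, b in zip(vals, vals[1:]))
```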

Probability a certain container was picked, given information about the contents of the containers

Posted: 27 Feb 2022 10:02 AM PST

There are $n+1$ containers, marked $C_0, C_1, C_2, ..., C_n$, each with $n$ balls inside. We know that container $C_i$ has exactly $i$ white balls and $n-i$ black balls. The probability of picking a specific container is $\frac{1}{n+1}$. We randomly pick a container and take out $r$ balls, of which $k$ are white and $r-k$ are black. What is the probability that we took these balls out of container $C_m$?

The answer I received from peers is: $$ P(C_m) = \frac{\binom{m}{k} \binom{n-m}{r-k}}{\binom{n+1}{r+1}}$$

However, this solution fails for edge cases. For example:

Let $n=2$, $r=k=1$, and we want to check the probability for container $C_0$. Container $C_0$ has exactly 2 black balls and 0 white balls. We take out 1 ball, which is white. What's the probability the container is container $C_0$? Logically, we can conclude: $0$. Yet algebraically this doesn't make sense, because $\binom{m}{k}$ fails (you can't choose 1 out of 0).

So is the answer I received from my peers correct?
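One way to settle it: compare the peers' closed form against a brute-force Bayes computation. The edge case is handled if one adopts the usual convention $\binom{m}{k}=0$ for $k>m$, which is what Python's `math.comb` returns. A sketch (my own check, not from the post):

```python
from math import comb

def posterior(n, r, k, m):
    """P(container C_m | k white among r drawn), by Bayes with a uniform prior."""
    # Hypergeometric likelihood numerator for container C_j.
    like = lambda j: comb(j, k) * comb(n - j, r - k)
    total = sum(like(j) for j in range(n + 1))
    return like(m) / total

def peers_formula(n, r, k, m):
    return comb(m, k) * comb(n - m, r - k) / comb(n + 1, r + 1)

# The two agree everywhere, including the "failing" case n=2, r=k=1, m=0,
# because comb(m, k) is simply 0 when k > m.
for n in range(1, 8):
    for r in range(1, n + 1):
        for k in range(r + 1):
            for m in range(n + 1):
                assert abs(posterior(n, r, k, m) - peers_formula(n, r, k, m)) < 1e-12

assert peers_formula(2, 1, 1, 0) == 0.0
```

The agreement rests on the identity $\sum_{j=0}^{n}\binom{j}{k}\binom{n-j}{r-k}=\binom{n+1}{r+1}$, which is what puts $\binom{n+1}{r+1}$ in the denominator.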

Show that $\arg_\tau(z)$ is not holomorphic in $\mathbb{C}^{\circ}_{\tau}$

Posted: 27 Feb 2022 10:01 AM PST

Let $\tau \in \mathbb{R}$, and define $\mathbb{C}^{\circ}_{\tau} = \{z \in \mathbb{C}: \tau \notin \arg(z)\}$. Show that $\arg_\tau(z)$ is not holomorphic in $\mathbb{C}^{\circ}_{\tau}$.

This is how I did it,

$$z = x+iy = re^{i\theta}$$

$$\arg(z) = \arctan\left(\frac{y}{x}\right) = u(x,y) + iv(x,y)$$

It's not holomorphic if it doesn't satisfy the Cauchy-Riemann equations. But $v(x,y) = 0$, which would mean that $\arg(z)$ is a constant. I don't know how to show that it's holomorphic everywhere except for $\mathbb{C}^{\circ}_{\tau}$.

Why is there a "+ C" when computing indefinite integrals? [closed]

Posted: 27 Feb 2022 09:57 AM PST

When evaluating indefinite integrals, there is a "+ C", and I do know that the curve resulting from the evaluation of the integral represents the area under that graph, but I am confused why the "+ C" is included. Wouldn't that change the area under the curve on some particular interval $[a, b]$?

Irreducible polynomials in fields & factor rings

Posted: 27 Feb 2022 10:20 AM PST

Let $I=f\cdot(\mathbb{Z}/5\mathbb{Z})[x]$ be the ideal generated by $f=x^3+x+[1]$. How many elements are in the field $(\mathbb{Z}/5\mathbb{Z})[x]/I$?

Express $Y=[4]X^3+X+[3]$ in the form $Y=aX^2+bX+c$, where $X=x+I$, $Y\in(\mathbb{Z}/5\mathbb{Z})[x]/I$, and $a,b,c\in\mathbb{Z}/5\mathbb{Z}$.

For the very first part I believe there are $5^3=125$ elements: in a past assignment I saw a similar question with $5$ replaced by $2$ and there were $8$ elements, and when replaced by $3$ there were $27$. But I'm not quite sure how to justify this, and I would like help showing that in the general case there are $n^3$ such elements. I can't see how this would follow naturally.

For the second part I have no idea where to even start.
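For a sanity check on the second part, one can reduce modulo the relation $X^3=-X-[1]=[4]X+[4]$ by hand, giving $Y=[4]([4]X+[4])+X+[3]=[2]X+[4]$, or mechanically with a few lines of Python (a sketch of my own; coefficients are stored lowest degree first):

```python
def poly_rem(f, g, p):
    """Remainder of f mod g over Z/pZ; coefficient lists, lowest degree first, g monic."""
    f = [c % p for c in f]
    while len(f) >= len(g):
        c = f[-1]
        if c:
            # Cancel the leading term of f against c * g shifted up.
            shift = len(f) - len(g)
            for i, gc in enumerate(g):
                f[shift + i] = (f[shift + i] - c * gc) % p
        f.pop()  # leading coefficient is now zero
    while f and f[-1] == 0:
        f.pop()
    return f

# Y = 4x^3 + x + 3 reduced modulo x^3 + x + 1 over Z/5Z
Y = poly_rem([3, 1, 0, 4], [1, 1, 0, 1], 5)
assert Y == [4, 2]   # i.e. Y = [2]X + [4]
```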

Total number of subsets in a set

Posted: 27 Feb 2022 10:07 AM PST

I was reading about subsets, and the article suggests that the total number of subsets of a set is $2^n$, where $n$ is the number of elements in the set. For example, for $\{1, 2, 3, 4, 5\}$ the total number of subsets is $32$, because $n$ is $5$ and $2^5$ is $32$ by the multiplicative principle.

But the multiplicative principle says that if one event can happen in $m$ ways and another in $n$ ways, then there are $m \times n$ possible outcomes. So in the subsets problem, if every element has 2 possibilities (being in the subset or not), why is it $2^5$ and not $2 \times 5$? I know that $2^5$ is correct, but I am not able to visualize it.
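The resolution is that there are five independent two-way choices, one per element, so the principle multiplies $2\cdot2\cdot2\cdot2\cdot2=2^5$, not $2\times 5$. A short enumeration makes this visible (my own sketch):

```python
from itertools import product

elements = [1, 2, 3, 4, 5]

# One independent in/out choice per element: 2 * 2 * 2 * 2 * 2 outcomes.
subsets = []
for choices in product([False, True], repeat=len(elements)):
    subsets.append({e for e, keep in zip(elements, choices) if keep})

assert len(subsets) == 2 ** len(elements) == 32
# Every subset arises exactly once, from the empty set to the full set.
assert set() in subsets and {1, 2, 3, 4, 5} in subsets
```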

What is the probability of rolling a 5-sided die 5 times and getting a 1 any number of times?

Posted: 27 Feb 2022 10:20 AM PST

I am so confused; wouldn't it just be $\frac{1}{5} \cdot 5$? The probability of rolling a six-sided die five times and getting a 1 at least one time is 5/6, but how would this work for a five-sided die?
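For reference, "at least one 1" is usually computed via the complement rather than by adding $\tfrac15$ five times (adding probabilities like that can exceed 1). For a fair five-sided die rolled five times, $P = 1-(\tfrac45)^5$. A quick exact check (my own sketch):

```python
from fractions import Fraction
from itertools import product

# Complement rule: P(at least one 1 in five rolls of a fair 5-sided die).
p = 1 - Fraction(4, 5) ** 5
assert p == Fraction(2101, 3125)   # = 0.67232

# Brute-force enumeration of all 5^5 equally likely outcomes agrees.
hits = sum(1 for rolls in product(range(1, 6), repeat=5) if 1 in rolls)
assert Fraction(hits, 5 ** 5) == p
```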

Showing Groups of Homomorphisms are Isomorphic

Posted: 27 Feb 2022 10:07 AM PST

While looking through prior exam problems for a group theory course, I encountered this question and am having some difficulty getting started.

Let $A,B,C$ be abelian groups. Let ${\rm Hom}(A \times B, C)$ be the set of all group homomorphisms from $A \times B$ to $C$. The question asks to show that the group ${\rm Hom}(A \times B, C)$ is isomorphic to ${\rm Hom}(A,C) \times{\rm Hom}(B, C).$

A hint would be appreciated. However, I have generally struggled with proving that groups are isomorphic, so if there are some general tips/patterns to look for, that would be appreciated as well.
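One concrete way to build intuition (not a proof, just a sketch of my own): a homomorphism out of $A\times B$ is determined by its restrictions to $A\times\{0\}$ and $\{0\}\times B$, since $(a,b)=(a,0)+(0,b)$. For finite cyclic groups $A=\mathbb{Z}_m$, $B=\mathbb{Z}_n$, $C=\mathbb{Z}_k$ one can at least check that the cardinalities match, using $|{\rm Hom}(\mathbb{Z}_m,\mathbb{Z}_k)|=\gcd(m,k)$:

```python
from math import gcd

def hom_count_cyclic(m, k):
    # Homs Z_m -> Z_k correspond to a in Z_k with m*a = 0 mod k; there are gcd(m, k).
    return sum(1 for a in range(k) if (m * a) % k == 0)

for m in range(1, 8):
    for n in range(1, 8):
        for k in range(1, 8):
            # A hom Z_m x Z_n -> Z_k is fixed by the images of (1,0) and (0,1),
            # chosen independently subject to the order conditions.
            lhs = sum(1 for a in range(k) for b in range(k)
                      if (m * a) % k == 0 and (n * b) % k == 0)
            rhs = hom_count_cyclic(m, k) * hom_count_cyclic(n, k)
            assert lhs == rhs == gcd(m, k) * gcd(n, k)
```

The map $f \mapsto (f|_{A\times\{0\}}, f|_{\{0\}\times B})$ is the natural candidate isomorphism in the general abelian case.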

Let $X$ be a normed space and $x_0, x_1 \in X$. Let $l \in (0,1)$ be fixed. Prove that the sequence $x_{n+1}=l x_{n}+(1-l) x_{n-1}$ converges.

Posted: 27 Feb 2022 10:01 AM PST

I'm trying to prove the following:

Let $(X, ||.||)$ be a normed space and $x_0, x_1 \in X$. Let $l \in (0,1)$ be a fixed constant. Prove that the sequence $x_{n+1}=l x_{n}+(1-l) x_{n-1}$ converges.

I started by noting that, because $X$ is a vector space, $x_n\in X$ for all $n\in \mathbb{N}$.

Now I need to show that there exists $x\in X$ such that $||x_n - x||\rightarrow 0$ as $n \rightarrow \infty$.

It is clear to me that the sequence $x_0, x_2, x_4, x_6,\cdots$ is "increasing": the elements are "moving away" from $x_0$ along the line between $x_0$ and $x_1$. And the sequence $x_1, x_3, x_5, x_7,\cdots$ is "decreasing": the elements are "moving away" from $x_1$ towards $x_0$.

Also, I know that the elements of the first subsequence are all "smaller" than the elements in the second subsequence.

So the limit $x$ should be the meeting point of these two subsequences.

PS! I understand, that there is no smaller or larger, because $x_0, x_1$ can be any two elements in X. I just put them on a number line.
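Numerically the picture checks out: the differences satisfy $x_{n+1}-x_n=(l-1)(x_n-x_{n-1})$, a geometric sequence with ratio $l-1\in(-1,0)$, which suggests the limit $x_0+\frac{x_1-x_0}{2-l}$ (my own computation, not part of the problem statement). A quick sketch in $\mathbb{R}^2$ with the Euclidean norm:

```python
l = 0.3
x0, x1 = (0.0, 1.0), (4.0, -2.0)   # arbitrary starting points in R^2
x_prev, x_cur = x0, x1

for _ in range(200):
    x_next = tuple(l * c + (1 - l) * p for c, p in zip(x_cur, x_prev))
    x_prev, x_cur = x_cur, x_next

# Conjectured limit: x0 + (x1 - x0) / (2 - l), from summing the geometric
# series of differences (x_{n+1} - x_n) = (l - 1)^n (x_1 - x_0).
limit = tuple(p0 + (p1 - p0) / (2 - l) for p0, p1 in zip(x0, x1))
assert all(abs(a - b) < 1e-10 for a, b in zip(x_cur, limit))
```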

Chain rule multivariable calculus.

Posted: 27 Feb 2022 10:13 AM PST

I have problem understanding the chain rule. For example consider a function $w = f(x,y)$ and $y=x^2$. By the chain rule:

\begin{equation} \frac{\partial w}{\partial x} = \frac{\partial w}{\partial x}\frac{\partial x}{\partial x}+\frac{\partial w}{\partial y}\frac{\partial y}{\partial x} = \frac{\partial w}{\partial x}+2x\frac{\partial w}{\partial y} \end{equation}

so $\frac{\partial w}{\partial y} = 0$. What is wrong with this reasoning? Can you give some example to show that this is not true?
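A concrete instance (my own example) is $w=x+y$ with $y=x^2$: then $\partial w/\partial y=1\neq 0$. The slip is that "$\frac{\partial w}{\partial x}$" means two different things on the two sides of the displayed equation: the total derivative along the constraint versus the partial derivative with $y$ held fixed. A SymPy sketch:

```python
import sympy as sp

x, y = sp.symbols('x y')
w = x + y                        # w(x, y), with the constraint y = x^2

partial_wx = sp.diff(w, x)       # partial derivative, y held fixed: 1
partial_wy = sp.diff(w, y)       # equals 1, certainly not 0
total_dwdx = sp.diff(w.subs(y, x**2), x)   # derivative along the constraint: 1 + 2x

assert partial_wy == 1
# The chain rule relates the *total* derivative to the partials:
assert sp.simplify(total_dwdx - (partial_wx + 2 * x * partial_wy)) == 0
```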

State the likelihood function

Posted: 27 Feb 2022 10:10 AM PST

I'm learning about likelihood functions, and there is a task in my book. I'm not sure if I did it correctly; could you help with some tips?

Following task:

Clients are lining up in a post office. We record the time $t_1, ..., t_N$ in minutes required to serve the $N$ consecutive clients. We distinguish between two types of clients: those that are coming to send a packet, and those that are coming to send a letter (and whose service is typically twice as fast). Service times for all clients are independent and drawn from an exponential distribution with rate dependent on whether the client sends a packet or a letter:

$$p(t_i|\theta) = \theta \exp(-\theta t_i) \quad \text{(packet)}$$ $$q(t_i|\theta) = 2\theta \exp(-2\theta t_i) \quad \text{(letter)}$$

and where $\theta$ is a parameter between $0$ and $\infty$ to be learned.

Consider six clients, the first two wanted to send a packet, and stayed at the post office for $2$ and $5$ minutes respectively. The last four clients wanted to send a letter and were served in $1$ minute each.

State the likelihood function measuring the joint probability of observing all these events.

My problem here is that I have the data $D$ and the information about six clients. How do I combine this information?

My solution(s):

1:

$P(t_1, ..., t_N, D|\theta) = \prod_{i=1}^{N} (\theta \exp(-\theta t_i))^2 \cdot (2\theta \exp(-2\theta t_i))^4$

2: Just consider the six given clients.

$P(2,5,1,1,1,1|\theta) = \theta \exp(-2\theta) \cdot \theta \exp(-5\theta) \cdot (2\theta \exp(-2\theta))^4$
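Assuming the second formulation (which conditions on the six observed clients, as the task asks), the likelihood collapses to $L(\theta)=16\,\theta^6 e^{-15\theta}$. One can confirm this symbolically, and as a bonus maximize $\log L$ (the MLE is not asked for, but it is a useful check that the likelihood is well-formed). A SymPy sketch of my own:

```python
import sympy as sp

theta = sp.symbols('theta', positive=True)

# Likelihood of the six observations: packets served in 2 and 5 minutes,
# four letters served in 1 minute each.
L = (theta * sp.exp(-2 * theta)) * (theta * sp.exp(-5 * theta)) \
    * (2 * theta * sp.exp(-2 * theta)) ** 4

assert sp.simplify(L - 16 * theta**6 * sp.exp(-15 * theta)) == 0

# Maximizing log L: d/dtheta (6*log(theta) - 15*theta) = 0  =>  theta = 2/5.
crit = sp.solve(sp.diff(sp.log(L), theta), theta)
assert sp.Rational(2, 5) in crit
```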

Magnitude of $\cot(z)$ for large $|z|$

Posted: 27 Feb 2022 09:58 AM PST

Context

This question regards the evaluation of sums using periodic functions in a product. I am having difficulty understanding the basis for the same. Here are two interrelated questions that I believe will help my conceptual understanding.

Question

(1) How can I show that $\cot {z}$ has magnitude of order 1 for large $|z|$ when not extremely close to one of its poles?

(2) How can I show that $\cot {z}$ does not affect the limiting behavior of $\oint f(z)\,\cot{(z)}\,dz$?

My attempts at a solution

Part 1.

I start with a product expansion.

\begin{align*} \cot{(\pi\,z)} &= \frac{\cos {\pi\,z}}{\sin \pi\,z} \\ &= \frac { \prod_{n = 1}^\infty\left(1 - \frac{(\pi\,z)^2}{\pi^2\left(n - \frac{1}{2}\right)^2}\right) } { \pi\,z \prod_{n = 1}^\infty\left(1 - \frac{(\pi\,z)^2}{\pi^2 n^2}\right) } \\ &= \frac { \prod_{n = 1}^\infty\left(1 - \frac{z^2}{ \left(n - \frac{1}{2}\right)^2}\right) } { \pi\,z \prod_{n = 1}^\infty\left(1 - \frac{ z ^2}{ n^2}\right) } \\ &= \frac { \prod_{n = 1}^\infty\left( \left[ \left(n - \frac{1}{2}\right)^2 - z^2\right]\,n^2\right) } { \pi\,z \prod_{n = 1}^\infty\left( \left[n^2 - z ^2\right]\left(n - \frac{1}{2}\right)^2 \right) } \\ &= \frac { \prod_{n = 1}^\infty\left( \left[ n^2\,\left(n - \frac{1}{2}\right)^2\right] - n^2\,z^2\right) } { \pi\,z \prod_{n = 1}^\infty\left( \left[n^2\,\left(n - \frac{1}{2}\right)^2 \right]- \left(n - \frac{1}{2}\right)^2 \,z ^2\right) } \end{align*} I argue (this seems like hand waving to me---maybe because I do not know how to deal with competing infinities) that for any $n$ that I consider, I can write $z$ as, for example, $e^{i\phi}\,n^n$. Now, I write $z = R\,e^{i\theta}$: \begin{align*} &= \frac{1}{ \pi\,z } \prod\limits _{n = 1}^\infty \frac { \left( n^2\,\left[ \left(n - \frac{1}{2}\right)^2 - R^2\,\left[\cos^2{\theta} - \sin^2{\theta} \right]\right] - i\,2\, n^2\,R^2\,\cos\theta\,\sin\theta \right) } { \left(n - \frac{1}{2}\right)^2 \left[n^2 - R^2\,\left[\cos^2{\theta}- \sin^2\theta \right] \right]- i\,2\,\left(n - \frac{1}{2}\right)^2 \, R^2\, \cos\theta\,\sin\theta } \\\\ &= \frac{1}{ \pi\,z } \prod\limits _{n = 1}^\infty \frac{n^2 }{ \left(n - \frac{1}{2}\right)^2} \sqrt { \frac { \left[ \left(n - \frac{1}{2}\right)^2 - R^2\,\left[\cos^2{\theta} - \sin^2{\theta} \right]\right]^2 + \left[2 \,R^2\,\cos\theta\,\sin\theta\right]^2 } { \left[ \left(n \right)^2 - R^2\,\left[\cos^2{\theta} - \sin^2{\theta} \right]\right]^2 + \left[2 \,R^2\,\cos\theta\,\sin\theta\right]^2 } } e^\theta \end{align*}

I think this is hand waving, but here goes. I think of a particular $n$ for the moment, and pick $R$ to be much, much bigger. If you want a larger $n$---say on the order of my $R$---then I pick one still larger. Then, \begin{align*} &= \frac{1}{ \pi\,z } \prod\limits _{n = 1}^\infty \frac { \left( n^2\,\left[ \left(n - \frac{1}{2}\right)^2 - R^2\,\left[\cos^2{\theta} - \sin^2{\theta} \right]\right] - i\,2\, n^2\,R^2\,\cos\theta\,\sin\theta \right) } { \left(n - \frac{1}{2}\right)^2 \left[n^2 - R^2\,\left[\cos^2{\theta}- \sin^2\theta \right] \right]- i\,2\,\left(n - \frac{1}{2}\right)^2 \, R^2\, \cos\theta\,\sin\theta } \\\\ &= \frac{1}{ \pi\,z } \prod\limits _{n = 1}^\infty \frac{n^2 }{ \left(n - \frac{1}{2}\right)^2}\,R^2 e^\theta \end{align*}

NOT WORKING. TRY SOMETHING ELSE

\begin{align*} \cot \theta = i\, \frac{e^{i\theta} + e^{-i\theta}}{e^{i\theta} - e^{-i\theta}} \end{align*} \begin{align*} \cot z = i\, \frac{e^{iz} + e^{-iz}}{e^{iz} - e^{-iz}} \end{align*} \begin{align*} \cot z = i\, \frac{e^{+i\, \sqrt{x^2+y^2} \left[\frac{x+ i\,y}{\sqrt{x^2+y^2}} \right] } + e^{-iz}}{e^{iz} - e^{-iz}} \end{align*} \begin{align*} \cot z = i\, \frac{e^{+ix - y} + e^{-ix +y }}{e^{+ix - y} - e^{-ix +y }} \end{align*} \begin{align*} \left.\cot z\right|_{y=0} = i\, \frac{e^{+ix } + e^{-ix }}{e^{+ix } - e^{-ix }} \end{align*}

NOT WORKING. SEEK HELP

Part 2.

I have no idea, but I assume that an application of Jordan's lemma is in order.
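For part 1, the exponential form the second attempt starts from does go through: writing $z=x+iy$ in $\cot z = i\,\frac{e^{iz}+e^{-iz}}{e^{iz}-e^{-iz}}$, one of $e^{\mp y}$ dominates as $|y|\to\infty$, so $|\cot z|\to 1$ away from the real axis; on the real axis the magnitude stays of order 1 except near the poles $z=k\pi$. A quick numerical sketch of my own:

```python
import cmath

def cot(z):
    return cmath.cos(z) / cmath.sin(z)

# Far from the real axis, |cot z| is within roughly e^(-2|Im z|) of 1.
for z in (10 + 10j, -3 + 10j, 100 - 10j, 0.5 - 20j):
    assert abs(abs(cot(z)) - 1) < 1e-6

# On the real axis, cot vanishes midway between poles; order-1 magnitude
# fails only in small neighbourhoods of the poles z = k*pi.
assert abs(cot(cmath.pi / 2 + 7 * cmath.pi)) < 1e-10
```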

The formula that counts the number of averages of $2k$-separated prime pairs in the interval $[n + 2k, (n+1)^2 - 2k]$ has the following form.

Posted: 27 Feb 2022 10:10 AM PST

Let $k \geq 1$ and $n \geq k+1$. Then the formula:

$$ f(k,n) := \sum_{d \mid n\#} (-1)^{\omega(d)}\sum_{\substack{c \mid d \\ \gcd(c, 2k) \neq 1}} \left(\left\lfloor \dfrac{(n + 1)^2 - 2k - x_{c,d}}{d} \right\rfloor + \left\lfloor \dfrac{x_{c,d} - 2k - n}{d} \right\rfloor \right) $$

where $x_{c,d} \equiv k \pmod c$ and $x_{c,d} \equiv -k \pmod {d/c}$ for each $c\mid d\mid n\#$, counts exactly the number of averages $x$ of $2k$-separated prime pairs $p = x - k, q = x + k$ in the interval $[n + 2k, (n+1)^2 - 2k]$. This means that $k = 1$ corresponds to counting twin primes in the moving, growing interval.

Questions. Can you see a way to relate the count for $2k$ to the count for $2(k + 1)$? If not, then please answer:

What is a property from arithmetic function theory that would allow us to rearrange such sums? Since as you can see it's a double-sum over divisors, it involves the Liouville function $(-1)^{\Omega(d)} = (-1)^{\omega(d)}$ (here equality holds, because of the primorial $n\#$).

Is there a theory about such alternating sums i.e. one that talks about sufficient conditions for the alternating sum to be nonzero? Clearly, once you have a proof that the formula is valid, then we automatically get that $f(k,n) \geq 0$ for all $k\geq 1, n\geq k + 1$.

I have checked (using SymPy) that the formula is valid for small cases $k, n \leq 15$, since larger cases take exponentially longer to compute.

I don't yet have a full proof of this, but for the $k=1$ case you can find one here: Math Overflow post.

Note that in this new post (here), I've changed the interval endpoints to not use prime numbers $p_n$, but instead to be expressed simply in terms of $n$. An updated proof should go through with no trouble. To prove the $k \geq 2$ cases, we have to somehow work in the $\gcd(c, 2k) \neq 1$ piece. Speaking from experience, that shouldn't be too much more difficult than the proof you see on that MO page.

Is this true for a convex function?

Posted: 27 Feb 2022 10:09 AM PST

Let $f: \mathbb{R}_+ \to \mathbb{R}$ satisfy:

  1. $f'(x)<0$ for all $x \in \mathbb{R}_+$,
  2. $f''(x)>0$ for all $x \in \mathbb{R}_+$,
  3. $f(0)=1$ and $\displaystyle\lim_{x\to +\infty}f(x)=0$.

Prove or disprove that $$x(f'(x))^2 > f(x)f'(x)+xf(x)f''(x).$$

Note that the above inequality is satisfied by $f(x)=\dfrac{1}{(x+1)^\alpha}$ for $\alpha \geq 1$. I think the above inequality is true, but I do not know how to prove it.

Thanks in advance for any comments.
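The stated family can at least be checked symbolically: for $f(x)=(x+1)^{-\alpha}$ the difference between the two sides simplifies to $\alpha\,(x+1)^{-2\alpha-2}$, which is positive for every $\alpha>0$ (so the restriction $\alpha\geq 1$ is not even needed for this family). A SymPy sketch of my own:

```python
import sympy as sp

x, a = sp.symbols('x alpha', positive=True)
f = (x + 1) ** (-a)

lhs = x * sp.diff(f, x) ** 2
rhs = f * sp.diff(f, x) + x * f * sp.diff(f, x, 2)

# LHS - RHS == alpha * (x + 1)^(-2*alpha - 2), which is > 0 for x, alpha > 0.
assert sp.simplify(lhs - rhs - a * (x + 1) ** (-2 * a - 2)) == 0
```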

Is it possible to find at least one function satisfying both $a)$ and $b)$?

Posted: 27 Feb 2022 10:00 AM PST

Let $a<1$ and let $\varepsilon>0$. I am looking for a function $f\in C^1(\mathbb{R}\setminus\{0\})$ satisfying both $$a)\quad f^{\prime}(x)\ge \frac{1}{x^2} \qquad \mbox{ for } 0<x<\varepsilon;$$ $$b)\quad f(x)\ge \frac{1}{x^a} \qquad \mbox{ for } 0<x<\varepsilon.$$

As far as I can tell, examples can be found only when $a<0$. I am not sure, but maybe one example can be constructed in this way: take $a=-1$ and $f(x) = x$. Then $f(x)$ satisfies $b)$, and $f^{\prime}(x) =1$ satisfies $a)$ for $x>1$, i.e. you can take $\varepsilon=2$ and $a)$ is satisfied for $1<x<2$, but not in the range $0<x<1$ (this is the reason why I am not convinced).

Could someone please help me?

Thank you in advance!

Volume of the Boundary of a bounded convex set.

Posted: 27 Feb 2022 10:16 AM PST

Consider a convex set bounded inside the unit cube $K \subset [-1, 1]^n\subset \mathbb{R}^n$. Is there a bound on the volume of its boundary $vol_{n-1}(\partial K)$?

The unit cube gives $vol_{n-1}(\partial [-1,1]^n) = 2^n n$, and it sounds "obvious" that it maximizes this quantity, but I have no idea how to show this (or whether this "obvious" claim is actually true).

Any help/references will be appreciated.

(Thanks Chris Sanders for pointing out a mistake in my original question)

Why does the fact that $f$ and $g$ have the same Fourier coefficients imply that $f=g\text{ a.e.} $

Posted: 27 Feb 2022 10:08 AM PST

I am self-studying Rudin's RCA, and I have encountered a puzzle that has stumped me for a long time. Here is the relevant text from RCA, followed by my question.

Suppose now that $f\in L^1(T)$, that $\{c_n\}$ is given by $ c_n = \frac{1}{2\pi} \int_{-\pi}^\pi f(x)e^{-inx} \, dx$, and that $\sum_{-\infty}^\infty |c_n|<\infty$. Put $g(x)=\sum_{-\infty}^\infty c_n e^{inx}$.

We can get that the series $g(x)$ converges uniformly by $\sum_{-\infty}^\infty |c_n|<\infty$ (hence $g$ is continuous), and the Fourier coefficients of $g$ are easily computed. Through the computation, we know that $f$ and $g$ have the same Fourier coefficients. This implies $f=g\text{ a.e.}$, so the Fourier series of $f$ converges to $f(x)\text{ a.e.}$

My question is about the very last line: why does the fact that the Fourier coefficients of $f$ and $g$ are equal imply that $f=g\text{ a.e.}$?

Guess 1: I am aware that if $f$ and $g$ are continuous, then this implication follows. So we may use the fact that $C(T)$ is dense in $L^1(T)$ to try to solve this question. But I am stuck with the $\epsilon$ argument.

Guess 2: Is it true that if a function's Fourier series converges pointwise, then the series must converge (pointwise a.e.) to the original function? If yes, then we can say that the Fourier series of $f$ converges pointwise to $g$, and $g$ is equal to $f\ a.e.$ by the above guess.

Prove that a sum of squares to the $n$th power is also a sum of squares [closed]

Posted: 27 Feb 2022 10:01 AM PST

Let $p$ and $q$ be real numbers, let $n$ be a positive integer, and let $i$ be the imaginary unit. Using the factorisation $p^2+q^2=(p+qi)(p-qi)$, prove for any $p$, $q$ and $n$, that the sum of the squares of $p$ and $q$, all raised to the power $n$, $(p^2+q^2)^n$, is also a sum of squares.

I am doing this as a way to practice relating complex numbers to real numbers. I have tried expanding $(p^2+q^2)^n$ using the binomial theorem to no avail. I have also tried rewriting everything in polar coordinates, bringing me nowhere.
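One sanity check of where the given factorisation points (my own sketch, not a full solution): if $P+Qi := (p+qi)^n$, then $(p^2+q^2)^n = \left((p+qi)(p-qi)\right)^n = P^2+Q^2$, and this can be tested with exact integer arithmetic:

```python
def gauss_pow(p, q, n):
    """Compute (p + q*i)^n with exact integer arithmetic; returns (P, Q)."""
    P, Q = 1, 0
    for _ in range(n):
        # (P + Q i)(p + q i) = (P p - Q q) + (P q + Q p) i
        P, Q = P * p - Q * q, P * q + Q * p
    return P, Q

for p, q, n in [(2, 3, 5), (1, 1, 10), (7, -4, 6), (3, 0, 4)]:
    P, Q = gauss_pow(p, q, n)
    assert (p * p + q * q) ** n == P * P + Q * Q
```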

When $2^{2012}$ is multiplied by $5^{2013}$, what is the sum of all the digits in the product? [closed]

Posted: 27 Feb 2022 10:03 AM PST

When $2^{2012}$ is multiplied by $5^{2013}$, what is the sum of all the digits in the product?

I know that if the base is the same, I can add the exponents, but the bases here are not the same, so nothing matches a formula. I don't get it; the only way I can think of to find the answer is to compute the value of $2^{2012} \cdot 5^{2013}$. I tried to do the computation manually and got 5 as the answer, but in a test I wouldn't have enough time to do the actual computation, so is there an easier way to do it?
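The shortcut is to pair each $2$ with a $5$: $2^{2012}\cdot 5^{2013} = 5\cdot(2\cdot 5)^{2012} = 5\cdot 10^{2012}$, a single $5$ followed by $2012$ zeros, so the digit sum is $5$ (matching the manual answer above). Python's big integers can confirm this directly (a sketch):

```python
n = 2 ** 2012 * 5 ** 2013

# 2^2012 * 5^2013 = 5 * (2*5)^2012 = 5 * 10^2012: a 5 followed by 2012 zeros.
assert n == 5 * 10 ** 2012
assert str(n) == "5" + "0" * 2012
assert sum(int(d) for d in str(n)) == 5
```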

On how to find the maximum point of a formula.

Posted: 27 Feb 2022 10:19 AM PST

I need help finding out how to find the maximum of a formula I devised, given the starting conditions.

$$y = \sqrt{{\frac{{mx^2v^2 - 2xF}}{{m}}}}$$

or, adding in the values of F,

$$y = \sqrt\frac{{mx^2v^2 - xpcA(w - xv)^2}}{{m}}$$

No variable can equal 0, except for $y$; the expression should be undefined at $x = 0$.

When placing this in Desmos, it graphs a function with a maximum. I have found that it hits 0 at $2F/(mv^2)$. But I have yet to be able to find out where it hits the maximum and begins to decrease.

To further make myself clear, x is changing and I need to find a formula which says at what x, given the other variables, would y reach the maximum. Formula 2 is more relevant as it separates F into the different variables.

How exactly would I go about finding this maximum? I know how to do it with simpler functions, but how can I find a formula for the maximum of a more complicated one like this? I need to be able to find both the $x$ and $y$ of the maximum.

I am sorry if this is a lot of work; I am just not sure how to do things like this myself. At the least, I want to be taught how to find the maxima of complicated functions, so that I can solve the problem myself.

Edit: Cleared up some confusion with variables, and specified my question better.

The derivative I got:

$$y' = \frac{xpcAw^2 - 2x^2 vpcAw + x^3v^2pcA}{2m^\frac{3}{2}\sqrt{mx^2v^2-xpcA(w-vx)^2}}$$

Setting it to 0 gives $x = \frac{w}{v}$ and $x = 0$, which only seem to describe the zeros of the numerator of the derivative, and when substituted into the original formula do not give the correct value of $x$.

A question on contraction mapping theorem and fixed point iteration

Posted: 27 Feb 2022 10:18 AM PST

First of all, thank you for taking the time to read my post. Secondly, this is a question I got as part of homework. However, the professor allows us to work in groups, so I'm hoping that this is okay. First I will state the question, without the actual numbers from the problem. I hope this will not affect the solution much. Also, I'm more interested in the general methodology than the actual solution.

Question

Let $f(x) = e^{-x}.$

a) Show that the sequence $x_{n+1} = f(x_n)$ converges to a unique fixed point $z\in[a,b]$ for any initial value $x_0\in [a,b].$

b) Also show that the sequence $x_{n+1} = f(x_n)$ converges to the unique fixed point $z\in[a,b]$ for any initial value $x_0\in (-\infty,a)\cup(b,\infty).$

My attempt

So, I did part (a) using the contraction mapping theorem. I first proved that $f$ is a contraction on $[a,b]$ and then I showed that $f$ maps $[a,b]$ into itself. Then by the contraction mapping theorem, (a) follows directly.

For (b) however, the contraction mapping theorem couldn't be used, because the interval is not a complete subset of the reals. I tried to show that for any initial point $x_0$, $x_1$ lies in $[a,b]$, so that the sequence would eventually lie within $[a,b]$ and everything would follow from part (a). Unfortunately, that didn't work either. Is there any way I can tackle the problem?

Thank you!

Maximising Winning Probability for an Alternating Series of Tennis Games

Posted: 27 Feb 2022 10:12 AM PST

Suppose that you are to play a series of $n$ games of tennis, from which you are eligible to win a prize if you win any $r < n$ consecutive games in the series. The probability of you winning a match, given that you serve, is $p_1$, and it is $p_2$ if you don't serve. Here, $p_1 > p_2$. (Note that $p_1 + p_2 \neq 1$.) If you serve in the first game, you don't serve in the second game, but you serve in the third game, and so on. To maximise the probability of winning the prize, how should you choose to play the series? Should you choose to serve first?

My initial guess is that whenever $n$ is even, your choice does not matter, since the sequence of the games is symmetric irrespective of your decision to serve in the first match. For the case of odd $n$, consider $n=3$ and $r=2$. In this scenario it is beneficial to start the series by letting the opponent serve, to ensure that you serve in the 2nd match, since winning the 2nd match is crucial to having a streak of 2 matches in any case. Mathematically,

$$\text{Pr(Win | Serve)} = p_1p_2 + (1-p_1)p_2 p_1 $$ $$ = (2 - p_1)p_1p_2 $$

Since you either win the first two games, or lose the first one and win the other two. Similarly, $$\text{Pr(Win | Don't Serve)} = p_2p_1 + (1-p_2)p_1 p_2 $$ $$ = (2 - p_2)p_1p_2 $$

And it is clear that $2 - p_1 < 2 - p_2$.

However, I am unable to generalise this result for general $n$ and $r$. I tried to extend my reasoning using induction on $n$ but couldn't really get anywhere.

How to discretize or transform an optimal control problem with free final time?

Posted: 27 Feb 2022 09:56 AM PST

Given is the following simple optimal control problem:

$$ \begin{align} &\min_{x, u, t_f} t_f \\ % \text{s.t.} \qquad \dot{x}(t) &= \begin{pmatrix} x_2(t) \\ u(t) \end{pmatrix} \\% x(0) &= (0, 0)^{\top} \\ x(t_f) &= (300, 0)^{\top} \\ x_1(t) &\in [0, 33] \\ x_2(t) &\in [0, 330] \\ u(t) &\in [-2, 1] \end{align} $$

with $x(t) = (x_1(t), x_2(t))^\top$. Here, $t_f$ is the free final time. I'd like to discretize the problem and solve it with a NLP solver. However, how can I handle the $t_f$ variable when discretizing the problem? Is there some transformation technique for the problem?
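A common transformation (not the only one) is to rescale time by $\tau = t/t_f$, so the dynamics become $\frac{dx}{d\tau} = t_f\,f(x,u)$ on the fixed interval $\tau\in[0,1]$, and $t_f$ enters the NLP as an ordinary decision variable alongside the discretized states and controls. A minimal sketch of the scaled discretization for this double integrator (forward Euler only; the NLP solver itself is omitted):

```python
def simulate_scaled(tf, u_seq, x0=(0.0, 0.0)):
    """Forward-Euler integration of dx/dtau = tf * f(x, u) on tau in [0, 1].

    In an NLP transcription, tf and u_seq would be decision variables and
    these update equations would become equality constraints.
    """
    N = len(u_seq)
    h = 1.0 / N
    x1, x2 = x0
    for u in u_seq:
        # f(x, u) = (x2, u); the tf factor is the whole time transformation.
        x1, x2 = x1 + h * tf * x2, x2 + h * tf * u
    return x1, x2

# Sanity check: with u = 1 throughout and tf = 2, x2(tf) should equal tf * 1 = 2,
# and x1(tf) should approach tf^2 / 2 = 2 as N grows.
N = 1000
x1, x2 = simulate_scaled(2.0, [1.0] * N)
assert abs(x2 - 2.0) < 1e-9
assert abs(x1 - 2.0) < 0.01
```

The objective $\min t_f$ then simply minimizes one scalar variable, with the path bounds on $x_1, x_2, u$ imposed at each grid point.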

Velocity vector from inertial to body frame

Posted: 27 Feb 2022 10:02 AM PST

Here's my question:

I have the position and velocity vectors of a body in the inertial frame. Now I need to switch the reference system to the body frame.

So I have

$\bar{x}_b = \hat{R}\bar{x}_I$

where

$\hat{R}=\hat{R}( \phi(t),\theta(t),\psi(t)) = \left[ \begin{matrix}\cos{(\theta)}\cos{(\psi)} & \cos{(\theta)}\sin{(\psi)} & -\sin{(\theta)}\\ \sin{(\phi)}\sin{(\theta)}\cos{(\psi)}-\cos{(\phi)}\sin{(\psi)} & \sin{(\phi)}\sin{(\theta)}\sin{(\psi)}+\cos{(\phi)}\cos{(\psi)} & \sin(\phi)\cos(\theta) \\ \cos{(\phi)}\sin{(\theta)}\cos{(\psi)}+\sin{(\phi)}\sin{(\psi)} & \cos{(\phi)}\sin{(\theta)}\sin{(\psi)}-\sin{(\phi)}\cos{(\psi)} & \cos(\phi)\cos(\theta) \end{matrix} \right]$

So now I should do:

$\bar{v}_b = \dot{\bar{x}}_b = \dot{\hat{R}}\,\bar{x}_I + \hat{R}\,\dot{\bar{x}}_I$

Is this correct? My doubt is about the rotation matrix: should it be differentiated as well? Or am I missing something?

Thanks in advance!
