Monday, April 18, 2022

Recent Questions - Mathematics Stack Exchange


The only continuous functions satisfying $f(x+y)f(x-y)=f(x)^2f(y)^2$ are $f\equiv-1,0,1$ - solution verification

Posted: 18 Apr 2022 10:26 AM PDT

Problem 4539, source - "Les mathematiques par les problemes":

Determine all the continuous functions $f:\Bbb R\to\Bbb R$ such that: $$\tag{$1$}\forall x,y\in\Bbb R:f(x+y)f(x-y)=f(x)^2f(y)^2$$

I propose that there are only three continuous solutions $f_{-1,0,1}$, all constant: $$f_k(x)\equiv\begin{cases}-1&k=-1\\0&k=0\\1&k=1\end{cases}$$

Unfortunately this exercise is in the "without solutions" section, so I'd like some comments on the validity or thoroughness of my approach.

My solution:

Let $f$ be any continuous solution to $(1)$. Of course $f_0$ is an easy continuous solution, so suppose $f$ is not identically $0$; it is easy to see then that $f(0)^2=1$ by letting $y=0$ in $(1)$. As a corollary, for nonzero solutions $f$: $$\begin{align}\tag{3}f(2x)f(0)&=f(x)^2\\\tag{4}f(x)f(0)&=f(x/2)^2\\\tag{5}f(-x)f(x)=f(x)^2,\,f(-x)&=f(x)\quad\text{if }f(x)\neq0\end{align}$$Suppose there exists an $\alpha$ with $f(\alpha)=0$. Then $(4)$ gives that $\alpha/2$ is also a root, and inductive application (of $(3)$ also) gives that $2^{-n}\alpha$ is a root for any integer $n$. The supposed continuity of $f$ gives: $$0=\lim_{n\to\infty}f(2^{-n}\alpha)=f(\lim_{n\to\infty}2^{-n}\alpha)=f(0)$$But $f(0)=0$ gives $f\equiv0$ by setting $y=0$ in $(1)$, a contradiction. So a solution $f$ is not identically $0$ if and only if $f$ has no zeroes, and $(5)$ reveals that all nonzero solutions are even. $(4)$ reveals that $f$ is either strictly positive or strictly negative, depending on the sign of $f(0)$ (as $f(0)^2=1$, $f(0)=\pm1$). Then if $f$ is any strictly negative solution, $g:=-f$ is easily seen to be a strictly positive solution. Without loss of generality, then, we hunt for a positive solution and recover the negative solutions afterward.

Let then $g\gt0$ be a solution of $(1)$. As $g(0)=1$, we have $g(2x)=g(x)^2$ and $g(-x)=g(x)$ for all $x$. Let $x\in\Bbb R$ be arbitrary and fixed. We have: $$g(4x)g(2x)=g(2x)^2g(2x)=(g(x)^2)^3=g(x)^6$$But also via $(1)$: $$\tag{6}g(x)^6=g(4x)g(2x)=g(3x)^2g(x)^2$$We also get: $$g(3x)g(x)=g(2x)^2g(x)^2=g(x)^6$$So that $g(3x)=g(x)^5$ ($g(x)\neq0$ always). By $(6)$: $$g(x)^6=(g(x)^5)^2g(x)^2\implies1=g(x)^6$$Since $x$ was arbitrary and $g$ was taken positive, $g\equiv 1$.

Thus far $f_0,f_1$ have been identified as the only nonnegative solutions. As mentioned, $f_{-1}:=-f_1$ is then the unique negative solution (technically they have only been shown to be candidate solutions, but that they are continuous and satisfy $(1)$ is obvious). $\blacksquare$

Certainly the solutions I found are correct solutions, but are they all the continuous solutions?

Lifetime risk and global annual rates

Posted: 18 Apr 2022 10:21 AM PDT

I'm trying to understand the connection between lifetime risk and global annual rates.

As a simple example, imagine a genuinely random event with a 10% lifetime risk (so we can ignore demographic factors and so on); say there is a 10% lifetime "risk" of seeing a UFO.

Based on that, and making a few assumptions about average life expectancy (let's say 80 years, though I know it's closer to 75) and global population (let's say 8 billion, and we can keep it fixed for simplicity), how many annual UFO sightings could we expect?

My simple approach was to say 10% of 8 billion is 800 million events, spread out over 80 years, for 10 million annual events, but this seems way too simple a way to ballpark it.

Any suggestions or thoughts? Does this simple way hold water? (And if I'm in the wrong place for this one, please point me in the right direction.)
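For what it's worth, the steady-state arithmetic in the question can be written out explicitly: with a fixed population and lifespan, roughly population/lifespan people "complete" a lifetime each year, and the lifetime risk tells us what fraction of those lifetimes contain an event. All figures below are the assumed round numbers from the question:

```python
# Steady-state first-order estimate, using the question's assumed figures.
lifetime_risk = 0.10          # assumed 10% lifetime risk
population = 8_000_000_000    # assumed fixed global population
lifespan = 80                 # assumed average lifespan in years

# Each year ~population/lifespan lifetimes "complete"; 10% of them contain
# one sighting, so:
annual_events = lifetime_risk * population / lifespan
print(int(annual_events))  # 10000000 per year
```

This reproduces the 10 million/year figure; the hidden assumptions are that the population is in a steady state and that each affected lifetime contains exactly one event.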

Finding the Local Truncation Error for the Explicit Euler Scheme

Posted: 18 Apr 2022 10:15 AM PDT

I need to demonstrate finding the LTE for the explicit Euler scheme when i) $\mu=\frac{1}{6}$ and ii) $\mu\neq\frac{1}{6}$. I have been looking for a reference text/video and I looked at the lecturer's notes, but I was really struggling to follow how to even begin. I know I will have to carry out a Taylor expansion, but not much beyond that.

The scheme: $\mathrm{U}_{i}^{n+1}=\mathrm{U}_{i}^{n}+\mu(\mathrm{U}_{i+1}^{n}-2\mathrm{U}_{i}^{n}+\mathrm{U}_{i-1}^{n})$

where $1\le i\le M-1$, $n\ge 0$, and $\mu=\frac{\Delta t}{(\Delta x)^2}$; the interior values are $\mathrm{U}_{i}^{n}$ for $0\lt i \lt M$, and the boundary values $\mathrm{U}_{0}^{n}$ and $\mathrm{U}_{M}^{n}$ are given for all $n\ge 0$.
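For context, here is a hedged sketch of where $\mu=\frac16$ becomes special, assuming (as the form of $\mu$ suggests) that the scheme discretizes the heat equation $u_t=u_{xx}$. Taylor-expand the exact solution in the scheme and use $u_t=u_{xx}$, hence $u_{tt}=u_{xxxx}$:

```latex
% Sketch, assuming the PDE is u_t = u_{xx}; T_i^n denotes the LTE.
T_i^n = \frac{u(x_i,t_{n+1})-u(x_i,t_n)}{\Delta t}
      - \frac{u(x_{i+1},t_n)-2u(x_i,t_n)+u(x_{i-1},t_n)}{(\Delta x)^2}
      = \underbrace{(u_t - u_{xx})}_{=0}
        + \frac{\Delta t}{2}\,u_{tt}
        - \frac{(\Delta x)^2}{12}\,u_{xxxx}
        + O\!\left(\Delta t^2,\,(\Delta x)^4\right)
      = (\Delta x)^2\left(\frac{\mu}{2}-\frac{1}{12}\right)u_{xxxx} + \dots
```

since $\Delta t=\mu(\Delta x)^2$. The leading term vanishes exactly when $\frac{\mu}{2}=\frac{1}{12}$, i.e. $\mu=\frac16$: for $\mu\neq\frac16$ the LTE is $O((\Delta x)^2)$, while for $\mu=\frac16$ it improves to $O((\Delta x)^4)$.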

Order 18 group with $C_3\times C_3$ as a subgroup and an element $t$ of order 2 such that $txt^{-1}=x^{-1}$ for all $x$ has no faithful complex reps.

Posted: 18 Apr 2022 10:20 AM PDT

I had a solution which involves induction from $P=C_3\times C_3$. If $\chi$ is an irred rep of $G$ (our order 18 group), then let $\psi$ be an irred component of the restriction of $\chi$ to $P$. It follows that inducing $\psi$ back up to $G$ gives a rep of $G$ with $\chi$ as a component and having dimension 2 (the index of $P$ in $G$). So every irred rep of $G$ has dimension at most 2. Now given a faithful irred rep of $G$, just choose some simultaneous eigenbasis of $P$. There are 9 possible matrices for an element of $P$. But since this rep is faithful, no non-identity element of $P$ acts as a scalar (then it would commute with $t$). So there are actually only 6 possibilities to distribute among the 8 non-identity elements of $P$, a contradiction.

I believe that there should be a much simpler solution since this appeared on an exam question whose previous parts did not mention induction. We were asked to show that if $Z$ is the center of a group $G$, and we are given an irred rep of $G$, that every element in $Z$ acts by scaling (follows from Schur's Lemma), the dimension is at most $\sqrt{[G:Z]}$ (follows from considering row-orthogonality) and that if the rep is faithful, then $Z$ is cyclic (Schur's).

Then we were asked to prove the above result by "considering the action of $P$ on an irreducible $\mathbb{C}G$-module."

Is there a name to refer to point $A$ / point $B$ when discussing vector $\overrightarrow{AB}$?

Posted: 18 Apr 2022 10:20 AM PDT

In code, I am writing concise documentation for a function that needs 2 vectors defining a triangle $ABC$. The function performs an operation which is invariant up to a translation, so only two vectors are necessary, named $\vec{s}$ and $\vec{z}$, and introducing points $A$, $B$ and $C$ is irrelevant and could even be misleading. However, the triangle $ABC$ has to be such that $[AB]$ is part of a polygonal chain (called a polyline in a computer-science context) that is defined elsewhere in the code, while $C$ belongs to another polygonal chain. The function has little meaning independently of that setting.

I first document the properties of $\vec{s}$ as being one vector of the first polyline. Then I need to document $\vec{z}$, for that it would be convenient to say that $\vec{z}$ is such that $A+\vec{z}$ is a point of the other polyline.

    s: vector, base of triangle, along polyline Γ_1  

However, not having given $A$ a name, I'm struggling, even when tolerating some abuse of notation:

    z: vector, pointing from [point of origin of] s along Γ_1 and to a point of Γ_2  

Ιs there some elegant way to phrase this? Maybe

    z: vector, pointing from Γ_1 and to Γ_2, with (s,z) an angle.  

...although again, it would properly be $(A,\vec{s},\vec{z})$ an angle.

On the behaviour for the quotient involving Fermat numbers of $\frac{\psi(F_m)}{F_m}$ where $\psi(x)$ denotes the Dedekind psi function

Posted: 18 Apr 2022 10:06 AM PDT

In this post we denote the Dedekind psi function as $\psi(m)$ for integers $m\geq 1$. This is an important arithmetic function in several subjects of mathematics. As a reference I add the Wikipedia article Dedekind psi function. I also add the reference that Wikipedia has the article Fermat number, $F_l=2^{2^l}+1$, and note that I was inspired by the results shown on page 101 of [1].

The Dedekind psi function can be represented for a positive integer $m>1$ as $$\psi(m)=m\prod_{\substack{p\mid m\\p\text{ prime}}}\left(1+\frac{1}{p}\right)$$ with the definition $\psi(1)=1$.

Question. I would like to know whether one can deduce some claim about $$\frac{\psi(F_m)}{F_m}$$ as $m\to \infty.$ Many thanks.

If the question is in the literature, please answer it as a reference request and I will try to search for and read the results in the literature. If the question is very difficult, I instead ask about the behaviour of, or a heuristic for, the quotient $\frac{\psi(F_m)}{F_m}$ for very large integers $m\geq 1$.
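Not an answer, but the first few cases can be computed directly from the product formula, using plain trial-division factoring (fine through $F_5=641\cdot 6700417$, hopeless much beyond that):

```python
from fractions import Fraction

def prime_factors(n):
    """Distinct prime factors of n by trial division."""
    ps, d = [], 2
    while d * d <= n:
        if n % d == 0:
            ps.append(d)
            while n % d == 0:
                n //= d
        d += 1
    if n > 1:
        ps.append(n)
    return ps

def psi_ratio(n):
    """psi(n)/n = product over primes p | n of (1 + 1/p)."""
    r = Fraction(1)
    for p in prime_factors(n):
        r *= 1 + Fraction(1, p)
    return r

for m in range(6):
    F = 2 ** (2 ** m) + 1
    print(m, F, float(psi_ratio(F)))
# F_0..F_4 are prime, so psi(F_m)/F_m = 1 + 1/F_m there;
# F_5 = 641 * 6700417 gives (1 + 1/641)(1 + 1/6700417).
```

The data are consistent with $\psi(F_m)/F_m \to 1$; heuristically this is supported by the known fact that every prime factor of $F_m$ exceeds $2^{m+2}$, which forces the product $\prod_p(1+1/p)$ over the (few) prime factors of $F_m$ toward $1$.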

References:

[1] Michal Krizek, Florian Luca, and Lawrence Somer, 17 Lectures on Fermat Numbers, CMS Books in Mathematics, Springer (2001).

Does convergence of conditional variance to 0 imply convergence of unconditional variance to 0?

Posted: 18 Apr 2022 10:19 AM PDT

Let $P := (\Omega, \mathcal{F}, \mathbb{P})$ be a probability space and let $G$ be a sub-$\sigma$-field of $\mathcal{F}$. Suppose $\{X_k\}_{k \in \mathbb{N}}$ is a sequence of square-integrable real-valued random variables, assumed not $G$-measurable, defined on $P$ such that $\operatorname{Var}(X_{k}|G)$ converges in probability to $0$.

Does this imply that $X_k$ converges to $0$ (and if so, in what sense: almost surely or in probability)? Furthermore, and most importantly for my purposes, does $\operatorname{Var}(X_{k})$ converge to $0$?

Number of coins in a bottle of water

Posted: 18 Apr 2022 10:23 AM PDT

How many coins with a diameter of 28 mm and a height of 3 mm fit in a 20 liter bottle?

Bottle measurements:

  • Total height: 496 mm
  • Neck height: 68 mm
  • Neck diameter: 54.5 mm
  • Body diameter: 260 mm
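A hedged order-of-magnitude sketch: the packing fraction below is an assumption (randomly poured coins typically achieve something around 0.6-0.7, not the theoretical maximum), and a 28 mm coin does fit through the 54.5 mm neck:

```python
import math

coin_d, coin_h = 2.8, 0.3     # coin diameter and height, in cm
bottle_volume = 20_000.0      # cm^3 (nominal 20 litres)
packing = 0.65                # ASSUMED random packing fraction (~0.6-0.7)

coin_volume = math.pi * (coin_d / 2) ** 2 * coin_h   # ~1.85 cm^3 per coin
estimate = bottle_volume * packing / coin_volume
print(round(estimate))        # roughly 7000 coins
```

The answer is dominated by the packing-fraction assumption, so anything from about 6,500 to 9,000 coins is defensible.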

Question on cauchy sequence in any metric space.

Posted: 18 Apr 2022 10:26 AM PDT

In a metric space $(X,d)$, if $\langle x_n\rangle$ is a Cauchy sequence and $\langle x_{i_n}\rangle$ is a subsequence of it, show that $d(x_n,x_{i_n})\to 0$ as $n\to\infty$.

My attempt:

Here, $\langle x_n\rangle$ is a Cauchy sequence. Therefore, for every $\epsilon>0$ there exists $n(\epsilon)\in\Bbb N$ such that $$d(x_n,x_{i_n})< \epsilon\quad\forall n,i_n \geq n(\epsilon).$$ Here we can clearly see that $x_n \to \infty$ and $x_{i_n} \to \infty$ as $n \to \infty$. Thus $d(x_n,x_{i_n})\to 0$ as $n \to \infty$.

But I am not sure whether my answer is correct, so please point out where I have made a mistake. Any help is appreciated.

For each of the following, determine whether the statement is true or false.

Posted: 18 Apr 2022 10:24 AM PDT

For each of the following determine whether the statement is true or false. Clearly explain your reasoning and illustrate the functions you create to support your decision.

a. $f(x)\cdot g(x)=g(x)\cdot f(x)$

b. $f(g(x))=g(f(x))$
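If the asterisk in (a) means ordinary multiplication of values and (b) means composition, a single concrete pair of functions already settles (b). This is only an illustrative sketch with hypothetical example functions:

```python
# Hypothetical example pair: f(x) = x + 1, g(x) = 2x.
f = lambda x: x + 1
g = lambda x: 2 * x

x = 3
print(f(x) * g(x), g(x) * f(x))   # 24 24 -> multiplication of values commutes
print(f(g(x)), g(f(x)))           # 7 8  -> composition does not
```

So (a) holds for all real-valued functions (multiplication of real numbers commutes), while (b) is false in general: one counterexample suffices.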

Limit of sequence $U_n=\frac{1}{n+1}+ \frac{1}{n+2}+\cdots+\frac{1}{2n}$

Posted: 18 Apr 2022 10:12 AM PDT

$$U_n = \frac{1}{n+1} + \frac{1}{n+2} + \cdots + \frac{1}{2n}$$ Find $\lim\limits_{n\to \infty} U_n$.


My reasoning: As $n$ approaches infinity, all terms approach $0$, and so the sum approaches $0$.

Roots of a given fourth degree polynomial function

Posted: 18 Apr 2022 10:06 AM PDT

Question

I am interested in the root of the polynomial function :

$x^4+(a+b+c+d-2)x^3+(ab+ac+ad+bc+bd+cd-2c-2b-a-d+1)x^2+(abc+abd+acd+bcd-ab-ac-2bc-bd-cd+b+c)x+(abcd-abc-bcd+bc)$

Under the restriction that: $0<a,b,c,d<1$.

Attempt

I tried this but it went nowhere. Does anyone have a tip for me?

A Chef has 15 cookbooks. In how many ways can he choose 8 books and line them up on a shelf above his kitchen counter?

Posted: 18 Apr 2022 10:25 AM PDT

A Chef has 15 cookbooks. In how many ways can he choose 8 books and line them up on a shelf above his kitchen counter?

I solved it the following way:

Ways of choosing 8 books out of 15: $\binom{15}{8}=\frac{15!}{7!\,8!}=6435$.

Then I calculated the number of ways to arrange those 8 books in a line: $8!=40320$.

Lastly, the total number of ways: $6435\times 40320=259459200$.

But I am not sure if I did it the correct way.
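Your choose-then-arrange count agrees with the permutation formula $P(15,8)=\frac{15!}{7!}$; a quick check:

```python
import math

chosen = math.comb(15, 8)         # 6435 ways to choose the 8 books
arranged = math.factorial(8)      # 40320 ways to order them on the shelf
total = chosen * arranged

print(total)                      # 259459200
print(total == math.perm(15, 8))  # True: choose-then-arrange equals 15P8
```

The identity $\binom{n}{k}\,k! = \frac{n!}{(n-k)!}$ is exactly why the two viewpoints match.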

What is the smallest 3D rotation to make the axes line up

Posted: 18 Apr 2022 10:16 AM PDT

A 3x3 rotation matrix is considered axis-aligned if it consists of only 1, -1, and 0. Given an arbitrary rotation matrix, what is the smallest rotation required to make it axis-aligned?

For example, given

$$R=\begin{pmatrix} 0.0281568 & 0.8752862 & 0.4827849\\ 0.9936430 & 0.0281568 & -0.1089990\\ -0.1089990 & 0.4827849 & -0.8689292 \end{pmatrix}$$

The best I can think of is to try to get R closer to identity by permuting and negating the rows, and then compute the angle from identity (by converting to axis-angle form).

So in the example I would multiply R by

$$R'= \begin{pmatrix} 0 & 1 & 0 \\ 1 & 0 & 0 \\ 0 & 0 & -1 \end{pmatrix} * R$$

and R' is about 30 degrees away from identity, so the answer is 30.

My method of computing the permutation is rather ad hoc. Is there a better way?
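One less ad hoc route (a sketch, not necessarily the fastest in general): there are exactly 24 axis-aligned rotations, the signed permutation matrices with determinant $+1$, so one can enumerate them all and take the smallest relative angle $\theta=\arccos\frac{\operatorname{tr}(A^\top R)-1}{2}$. Pure Python, no libraries assumed:

```python
import math
from itertools import permutations, product

def det3(A):
    return (A[0][0]*(A[1][1]*A[2][2]-A[1][2]*A[2][1])
          - A[0][1]*(A[1][0]*A[2][2]-A[1][2]*A[2][0])
          + A[0][2]*(A[1][0]*A[2][1]-A[1][1]*A[2][0]))

def axis_aligned_rotations():
    """All 3x3 signed permutation matrices with determinant +1 (24 total)."""
    mats = []
    for perm in permutations(range(3)):
        for signs in product((1, -1), repeat=3):
            A = [[signs[r] if c == perm[r] else 0 for c in range(3)]
                 for r in range(3)]
            if det3(A) == 1:
                mats.append(A)
    return mats

def angle_to_nearest(R):
    best = math.pi
    for A in axis_aligned_rotations():
        # trace(A^T R) is the entrywise sum of A*R, i.e. it picks one
        # entry of R per row, with A's signs; larger trace = smaller angle.
        tr = sum(A[r][c] * R[r][c] for r in range(3) for c in range(3))
        t = max(-1.0, min(1.0, (tr - 1.0) / 2.0))
        best = min(best, math.acos(t))
    return best

R = [[ 0.0281568, 0.8752862,  0.4827849],
     [ 0.9936430, 0.0281568, -0.1089990],
     [-0.1089990, 0.4827849, -0.8689292]]
print(math.degrees(angle_to_nearest(R)))   # close to 30 degrees, as expected
```

This turns the ad hoc permutation hunt into an exhaustive search over a tiny fixed set, which also guarantees the minimum is found.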

Show that the following subset of $\mathbb{R}^2$ is open

Posted: 18 Apr 2022 10:23 AM PDT

Question

I'm trying to show that the set $A=\{ (x,y) \in \mathbb{R}^2 : |x|+|y|<1\}$ is open in $\mathbb{R}^2$.

Attempt

Here is my try, but I'm not getting exactly what I would want:

Let $Q=(x_0,y_0) \in A$; then $|x_0|+|y_0| <1$. Let $r=1-|x_0|-|y_0|>0$ and let $P=(x,y) \in B(Q;r)$. Then:

  1. $|x|-|x_0| \leqslant |x-x_0| \leqslant ||(x-x_0,y-y_0)|| < r = 1-|x_0|-|y_0| \Rightarrow |x| < 1-|y_0|$
  2. $|y|-|y_0| \leqslant |y-y_0| \leqslant ||(x-x_0,y-y_0)|| < r = 1-|x_0|-|y_0| \Rightarrow |y| < 1-|x_0|$

Adding the last two expressions: $|x|+|y| < 2-|y_0|-|x_0|$

If I get $|x|+|y| < 1-|y_0|-|x_0|$ then the proof is done, but how can I do this?

I'm also aware that the points of $A$ lie inside a region bounded by $4$ lines, but I don't know how to use that analytically to conclude the result.
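One thing worth probing numerically before pushing the algebra further: whether the Euclidean ball of radius $r=1-|x_0|-|y_0|$ is actually small enough. A quick check with a hypothetical test point:

```python
import math

# Probe: Q = (0, 0) lies in A, with r = 1 - |0| - |0| = 1.
x0, y0 = 0.0, 0.0
r = 1 - abs(x0) - abs(y0)

# Hypothetical test point inside the Euclidean ball B(Q; r):
p = (0.8, 0.55)
dist = math.hypot(p[0] - x0, p[1] - y0)
print(dist < r)                      # True: p is in B(Q; r)
print(abs(p[0]) + abs(p[1]) < 1)     # False: p is NOT in A
```

So the factor of $2$ you are stuck with is real: the radius $r$ must be shrunk, e.g. to $r/\sqrt2$, using $|x-x_0|+|y-y_0|\le\sqrt2\,\|(x-x_0,y-y_0)\|$ in place of adding the two one-coordinate estimates.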

cauchy sequence and its subsequence in a metric space

Posted: 18 Apr 2022 10:21 AM PDT

In a metric space $(X,d)$, if $\langle x_n\rangle$ is a Cauchy sequence and $\langle x_{i_n}\rangle$ is a subsequence of it, show that $d(x_n,x_{i_n})\to 0$ as $n\to\infty$.

(Note that $\langle x_n\rangle$ and $\langle x_{i_n}\rangle$ may not be convergent.)

My attempt:

Here, $\langle x_n\rangle$ is a Cauchy sequence. Therefore, for every $\epsilon>0$ there exists $n(\epsilon)\in\Bbb N$ such that $$d(x_n,x_{i_n})< \epsilon\quad\forall n,i_n \geq n(\epsilon).$$ Here we can clearly see that $x_n \to \infty$ and $x_{i_n} \to \infty$ as $n \to \infty$. Thus $d(x_n,x_{i_n})\to 0$ as $n \to \infty$.

But I am not sure whether my answer is correct, so please point out where I have made a mistake. Any help is appreciated.

Find number of functions such that $f(f(x)) = f(x)$, where $f: \{1,2,3,4,5\}\to \{1,2,3,4,5\}$ [duplicate]

Posted: 18 Apr 2022 10:15 AM PDT

Consider set $A=\{1,2,3,4,5\}$, and functions $f:A\to A.$ Find the number of functions such that $f\big(f(x)\big) = f(x)$ holds.

I tried using recursion but could not form a recurrence relation. I tried counting the cases manually, but it was too exhaustive and impractical. I tried drawing/mapping, but that too reduced to counting by cases. Random variables were another approach I tried, but I couldn't form a summation.

A general idea for such problems is needed.

Thanks.
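For a set this small, brute force over all $5^5=3125$ functions is instant, and it gives something to check a counting argument against. The standard counting idea (stated here as a sketch): $f\circ f=f$ means $f$ is the identity on its image, so choose the $k$-element image and map the remaining $5-k$ elements anywhere into it, giving $\sum_k \binom{5}{k}k^{5-k}$:

```python
from itertools import product
from math import comb

A = range(1, 6)

# Brute force: encode f as the tuple (f(1), ..., f(5)).
count = sum(1 for f in product(A, repeat=5)
            if all(f[f[x - 1] - 1] == f[x - 1] for x in A))

# Closed form: pick the k fixed points (the image), then map the
# other 5 - k elements anywhere into the image.
closed = sum(comb(5, k) * k ** (5 - k) for k in range(1, 6))

print(count, closed)   # 196 196
```

The agreement of the two numbers is the sanity check that the "identity on its image" characterization captures exactly the idempotent functions.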

Polynomial $x^2-x-1$ exactly divides Polynomial $a_1x^{17}+a_2x^{16}+1$. Calculate $a_1*a_2$

Posted: 18 Apr 2022 10:23 AM PDT

Polynomial $x^2-x-1$ exactly divides Polynomial $a_1x^{17}+a_2x^{16}+1$. Calculate $a_1*a_2$

My initial thought was to substitute a root from the quadratic equation into the bigger polynomial, but it seems that the entire degree-$2$ polynomial divides the bigger polynomial, not just its factors.
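The substitution idea can be mechanized without ever plugging in the irrational roots: modulo $x^2-x-1$ one has $x^n \equiv F_n x + F_{n-1}$ (Fibonacci numbers, by induction from $x^2\equiv x+1$), so divisibility is equivalent to a $2\times 2$ linear system in $a_1, a_2$. A sketch:

```python
from fractions import Fraction

def fib(n):
    """Fibonacci numbers with F(1) = F(2) = 1."""
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a

# x^17 ≡ F17 x + F16 and x^16 ≡ F16 x + F15 (mod x^2 - x - 1), so:
#   a1*F17 + a2*F16     = 0   (coefficient of x in the remainder)
#   a1*F16 + a2*F15 + 1 = 0   (constant term of the remainder)
F15, F16, F17 = fib(15), fib(16), fib(17)
det = F17 * F15 - F16 * F16        # = 1, a Cassini-type identity
a1 = Fraction(F16, det)            # Cramer's rule
a2 = Fraction(-F17, det)

# Verify the remainder really vanishes:
rem_x = a1 * F17 + a2 * F16
rem_1 = a1 * F16 + a2 * F15 + 1
print(a1, a2, a1 * a2)             # a1 = 987, a2 = -1597
print(rem_x, rem_1)                # 0 0
```

So the divisibility pins down $(a_1,a_2)$ uniquely, and the product $a_1 a_2$ follows immediately.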

A basic question about finite semigroup

Posted: 18 Apr 2022 10:12 AM PDT

Let $S$ be a finite semigroup. Recall that every element $a\in S$ determines a unique pair of positive integers $\iota=\mathrm{ind}(a)$ and $\rho=\mathrm{per}(a)$, called the index of $a$ and the period of $a$, respectively. These are the smallest positive integers such that $a^{\iota}=a^{\iota+\rho}$.

In addition, given an element $a\in S$, it can be shown that the Kernel of $a$, namely the set $$ K_a=\{a^{\iota},a^{\iota+1},\ldots,a^{\iota+\rho-1}\} $$ forms a cyclic group.

My question: What are $\mathrm{ind}(x)$ and $\mathrm{per}(x)$ for $x\in K_a$? Can it be expressed in terms of $\mathrm{ind}(a)$ and $\mathrm{per}(a)$?
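A tiny experiment helps build intuition. The sketch below computes $\mathrm{ind}$ and $\mathrm{per}$ by listing powers until the first repetition, in the multiplicative semigroup $(\mathbb{Z}/12\mathbb{Z},\cdot)$ (an arbitrary illustrative choice):

```python
def ind_per(a, op):
    """Index and period of a: smallest i, r >= 1 with a^i = a^(i+r)."""
    powers = [a]                   # powers[k] holds a^(k+1)
    while True:
        nxt = op(powers[-1], a)
        if nxt in powers:
            j = powers.index(nxt)  # nxt = a^(j+1), the first repeated power
            return j + 1, len(powers) - j
        powers.append(nxt)

mul12 = lambda x, y: (x * y) % 12

print(ind_per(2, mul12))   # (2, 2): powers of 2 are 2, 4, 8, 4, 8, ...
# Kernel of 2 is K_2 = {4, 8}, a cyclic group whose identity is 4.
print(ind_per(4, mul12))   # (1, 1)
print(ind_per(8, mul12))   # (1, 2)
```

This already suggests the general pattern: for $x\in K_a$ we have $\mathrm{ind}(x)=1$ and $\mathrm{per}(x)\mid\rho$, since $K_a$ is a group of order $\rho$, so $x^{\rho}$ is its identity and $x^{1+\rho}=x$.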

Question on Chernoff bound type probability argument.

Posted: 18 Apr 2022 10:13 AM PDT

The following result was given in a research article (Claim 6) with no justification provided. I have stated the claim in simple terms below. Basically, I want to understand which theorems are used and how to prove the two "To Prove" parts below.


We have two devices $\mathscr{A}$ and $\mathscr{B}$ far apart. $\mathscr{A}$ takes input $x_i \in \{0,1,2\}$ and $\mathscr{B}$ takes input $y_i \in \{0,1\}$. They both give random outputs $a_i, b_i \in \{0,1\}$ respectively for $i$th input.

Uniformly $m$ random input pairs are selected $$I=\{ (x_i,y_i) \mid x_i \in \{0,1,2\}, y_i \in \{0,1\} \text{ for } 1\leq i \leq m\}$$ and outputs are collected in the set $\mathbb{O}$.

Let $C$ denote the set of inputs such that $(x_i,y_i) = (2,1)$, i.e., $C=\{ (x_i,y_i) \mid x_i = 2, y_i = 1 \}$.
At random, for $0 < \gamma <1$, a total of $\gamma m \in \mathbb{N}$ inputs are selected from $I$. Let us denote this set by $B$.

To Prove 1: The randomly chosen set $B$ contains at least a $\gamma/2$ fraction of the elements of the set $C$, except with probability at most $e^{- \gamma |C| /8}$.

Let $C_B$ denote the elements of $C$ in the set $B$. Suppose that for $C_B$ the outputs satisfy $a_i \neq b_i$ for at most an $\eta$ fraction of the rounds. Then:

To Prove 2: With probability at least $1-e^{- \gamma |C| /200}$ the total fraction of rounds in $C$ such that $a_i \neq b_i$ is at most $1.1 \eta$.
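Not a proof, but the first claim is easy to sanity-check by simulation before hunting for the right theorem (the bound has the flavor of a Chernoff/Serfling-type bound for sampling without replacement). All parameters below are arbitrary illustrative choices:

```python
import random

random.seed(0)
m, gamma = 1200, 0.5
trials, failures = 200, 0

for _ in range(trials):
    # inputs: x uniform in {0,1,2}, y uniform in {0,1}
    inputs = [(random.randrange(3), random.randrange(2)) for _ in range(m)]
    C = {i for i, (x, y) in enumerate(inputs) if (x, y) == (2, 1)}
    # B: gamma*m indices sampled uniformly without replacement
    B = set(random.sample(range(m), int(gamma * m)))
    if C and len(C & B) < (gamma / 2) * len(C):
        failures += 1

print(failures / trials)  # should be far below exp(-gamma*|C|/8), tiny here
```

With $|C|\approx m/6=200$ the claimed bound is about $e^{-12.5}\approx 4\times 10^{-6}$, so observing zero (or almost zero) failures in a few hundred trials is consistent with it.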

Forming the general form of parabola

Posted: 18 Apr 2022 10:15 AM PDT

Suppose the axis of a parabola is given by $\alpha x+\beta y+\lambda=0$ and the tangent at the vertex by $\beta x-\alpha y+\mu=0$. By the defining property of a parabola, the distances from the directrix and from the focus are equal; let us assume the focus is at a distance $a$ from the vertex tangent. Then we have $\frac{\beta x-\alpha y+\mu+a \sqrt{\alpha^{2}+\beta^{2}}}{\left(\sqrt{\alpha^{2}+\beta^{2}}\right)^{2}}=\sqrt{(x-h)^{2}+(y-k)^{2}}$, where $(h,k)$ is the focus location. How do we eliminate $h,k$ so as to get the general form of the parabola from here? The only equation relating $(h,k)$ is that of the axis line, which contains that point. The form should look like this: $\left(\frac{\alpha x+\beta y+\lambda}{\sqrt{\alpha^{2}+\beta^{2}}}\right)^{2}= \frac{1}{\sqrt{\alpha^{2}+\beta^{2}}}\left(\frac{\beta x-\alpha y+\mu}{\sqrt{\alpha^{2}+\beta^{2}}}\right)$

Does a codimension 1 subspace of a representation of lie group intersect all orbits?

Posted: 18 Apr 2022 10:25 AM PDT

Let $G$ be a nice Lie group, and $V$ a complex irreducible representation.

I am interested in understanding for which codimension-$1$ subspaces $U \subset V$ we have $G \cdot U = V$ (does such a $U$ exist?). Let me emphasize that $G \cdot U = V$ is meant pointwise (the union of translates), not as a span; this matches the title once $G$ is moved to the right.

Specifically in my case, I care about $SO(3)$ and its irreducible representations.

Calculate Projectile flight time based on Gravity

Posted: 18 Apr 2022 10:26 AM PDT

Hey, I want to calculate the flight time of a projectile in 3D space based on the bullet's speed, velocity, acceleration, and gravity (or a custom downward force).

I already have a formula that calculates the time it takes a bullet to intercept another moving target depending on the bullet's position, the target's position, the bullet's speed, the target's velocity, and the target's acceleration; it looks like this:

    a   = Acceleration
    v_T = Target Velocity
    p_T = Bullet Impact Position - Bullet Start Position
    s   = Bullet Speed

      t^4 * (a·a/4)
    + t^3 * (a·v_T)
    + t^2 * (a·p_T + v_T·v_T - s^2)
    + t   * (2.0 * v_T·p_T)
    +       p_T·p_T
    = 0

However, this would give me the time it takes for the Bullet to reach the Target, without the impact of Gravity or any downward force, assuming the Bullet travels in a straight path.

But I want to calculate the time it takes the bullet to reach the target under gravity, which makes the bullet travel along a curve (you aim above the target to compensate for gravity).

Since my math skills are unfortunately not the best, I hope that someone can help me here. What would the formula look like to be able to calculate the Travel Time if the factors were the following:

  • Bullet speed: 150 m/s
  • Gravity: 10 m/s$^2$
  • Bullet start position: Vector3D (0,0,0)
  • Target position: Vector3D (200,0,0)

Target acceleration and target velocity do not matter in this case.
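For a stationary target at the same height, this reduces to the classical ballistic range relation: pick the low launch angle $\theta$ with $\sin 2\theta = gd/v^2$, then the flight time is $t = d/(v\cos\theta)$. A sketch with the numbers above (this only handles the stationary, same-height case; a moving target needs the full quartic with a $-\frac12 g t^2$ term folded into the relative position):

```python
import math

v = 150.0    # bullet speed, m/s
g = 10.0     # gravity, m/s^2
d = 200.0    # horizontal distance to the target, m

s = g * d / v ** 2             # sin(2*theta); must be <= 1 to be reachable
theta = 0.5 * math.asin(s)     # low-arc solution (high arc: pi/2 - theta)
t = d / (v * math.cos(theta))  # flight time along the arc

print(math.degrees(theta), t)  # ~2.55 degrees, ~1.335 s
```

Note the time is only slightly longer than the no-gravity straight-line time $200/150 \approx 1.333\,\mathrm{s}$, because the compensation angle is tiny at this speed.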

Definition of certain Gateaux differentials of the norm in E. R. Lorch: "A Curvature Study of Convex Bodies in Banach Spaces"

Posted: 18 Apr 2022 10:15 AM PDT

In the paper E. R. Lorch: A Curvature Study of Convex Bodies in Banach Spaces from 1953, the following assumptions and definitions are stated in section II (p. 107-108):

Let $(B, \| \cdot \|)$ be a real Banach space and $B^*$ its dual space. Let $r > 1$ and $G(x) := \frac{1}{r} \| x \|^r$ and $\phi(\alpha, \beta) := G(x + \alpha y + \beta z)$, where $x, y, z \in B$ are fixed and $\alpha, \beta \in \mathbb{R}$. Assuming that $\phi$ is twice continuously differentiable, the derivative of $\phi$ with respect to $\alpha$ at $\alpha = \beta = 0$ will be denoted by $G_y(x)$. [...] It is clear that $G_x(x) = r G(x)$. [...] The second derivatives of $\phi$ at $\alpha = \beta = 0$ will be denoted by $G_{y, y}(x)$, $G_{y, z}(x)$ and $G_{z, z}(x)$.

I am trying to understand the functions $G_y$, $G_{y, y}$, $G_{y, z}$ and $G_{z, z}$. Firstly, we have $G \colon B \to \mathbb{R}$ and $\phi \colon \mathbb{R} \times \mathbb{R} \to \mathbb{R}$.

I would write $$ G_y(x) = \frac{\partial}{\partial \alpha}\bigg|_{\alpha = \beta = 0} \phi(\alpha, \beta) = \lim_{\gamma \to 0} \frac{\phi(\gamma, 0) - \phi(0, 0)}{\gamma} = \lim_{\gamma \to 0} \frac{G(x + \gamma y) - G(x)}{\gamma} = \text{d}G(x; y), $$ where the last expression is the Gateaux derivative of $G$ at $x$ in direction $y$.

What do the corresponding expressions for $G_{y, y}(x)$, $G_{y, z}(x)$ and $G_{z, z}(x)$ look like? In particular, are the second derivatives both taken with respect to $\alpha$, or are they mixed?

Wikipedia gives the following as one definition of the second order Gateaux derivative of $G$: $$ D^2 G(x)\{y ,z \} := \lim_{\gamma \to 0} \frac{\text{d}G(x + \gamma y; z) - \text{d}G(x; z)}{\gamma} = \frac{\partial^2}{\partial \alpha \partial \beta} \bigg|_{\alpha = \beta= 0} \phi(\alpha, \beta). $$ Is this expression $G_{y, z}(x)$?

If so, we would presumably have $$ G_{y, y}(x) = \frac{\partial^2}{\partial \alpha^2} \phi(\alpha, \beta) \bigg|_{\alpha = \beta = 0} = \frac{\partial^2}{\partial \alpha^2} G(x + \alpha y + \beta z) \bigg|_{\alpha = \beta = 0} $$ and analogously $$ G_{z, z}(x) = \frac{\partial^2}{\partial \beta^2} \phi(\alpha, \beta) \bigg|_{\alpha = \beta = 0} = \frac{\partial^2}{\partial \beta^2} G(x + \alpha y + \beta z) \bigg|_{\alpha = \beta = 0}. $$

According to Theorem 2 in that paper, we have $G_{y, z}(x) \in B^*$ for $y \in B$ and $G_y(x) \in B^*$ for $x \in B$. Do my above guesses about $G_{y, y}$, ... fulfill those requirements?

Discrete Dirichlet problem has a unique solution

Posted: 18 Apr 2022 10:07 AM PDT

I am studying Artin's Algebra. In Chapter 1 Exercise M11, he asks to show that every discrete Dirichlet problem on a finite discrete set in plane has a unique solution. This is essentially the question:

Let $R$ be a finite subset of $\mathbb{ Z\times Z}$. Let $\partial R$ be the set of all the points, not in $R$, which are unit distance away from some point in $R$. Let $\beta$ be a function on $\partial R$. Then we need to show that there exists a unique function $f$ on $R\cup \partial R$ such that $f(u, v) = \beta_{u, v}$ for all $(u, v)\in\partial R$ and that $f$ satisfies the discrete Laplace equation$^1$ for all points in $R$.

The full question can be found here.

My attempt at solving this:

After arbitrarily ordering $R$, we get a linear system $LX = B$ to solve for $X$ where $B$ contains combinations of $\beta_{u, v}$'s. We just need to show that any such $L$ is invertible, or equivalently, has a nonzero determinant.

I abstracted out the following properties that any coefficient matrix $L$ must satisfy (possibly after a rearrangement of equations, provided that we write the equations as mentioned in the footnote):

  1. The diagonal entries are all $4$.
  2. Nondiagonal entries can be either $0$ or $-1$.
  3. Any row can have at most four $-1$ entries (since any point in $R$ can have at most four points in $\partial R$ that are a unit distance away from it).
  4. Any column in $L$ can also have a maximum of four $-1$ entries (since any point in $R$ can be a unit distance away from at most four points in $R$).

Now I consider an arbitrary matrix $A$ satisfying the above properties. I rearrange the columns and rows (by elementary row and column operations) to get the following matrix: $$ \begin{bmatrix} B & \ast\\ \ast & B' \end{bmatrix}. $$ Here $B$ and $B'$ are matrices satisfying the above properties, with the additional condition on $B'$: its first column is entirely nonzero, and contains all the $-1$'s of the first column of the original $A$. Thus $B'$ has size at most $5\times 5$ and at least $1\times 1$. By induction, we can row reduce $B$ to $I$, and thus we can row reduce $A$ to $$ \begin{bmatrix} I & 0\\ \ast & B' \end{bmatrix}. $$

Now all I need to show is that $B'$ is also invertible. The brute force method contains $2^{20}$ cases with the general matrix to be analyzed for invertibility being $$ \begin{bmatrix} 4 & &\ast\\ &\ddots&\\ \ast&&4 \end{bmatrix}_{5\times 5} $$ with the nondiagonal entries being $-1$ or $0$. (This case will also suffice for the cases when $B'$ has a smaller size.)

But I am unable to prove this base case without brute force.

Can you help?


$^1$$f$ satisfies the Laplace equation at $(u, v)\in R$ iff $4f(u, v) = f(u+1, v) + f(u-1, v) + f(u, v+1) + f(u, v-1)$.

Convergence of the sequence $x_n= \frac{1}{n+1}+\frac{1}{n+2}+\dots+\frac{1}{2n}$

Posted: 18 Apr 2022 10:18 AM PDT

$\left\{x_n\right\}$ is a convergent sequence where $x_n= \frac{1}{n+1}+\frac{1}{n+2}+\dots+\frac{1}{2n}$. What is $\lim\limits_{n\to\infty}x_n?$


Here are my two approaches:

Using Euler's constant($\gamma$):

$$\begin{align}\lim_{n\rightarrow\infty}x_n &=\lim_{n\rightarrow\infty}(\frac{1}{n+1}+\frac{1}{n+2}+\dots+\frac{1}{2n})\\ &=\lim_{n\rightarrow\infty}[(1+\frac{1}{2}+\dots+\frac{1}{2n})-(1+\frac{1}{2}+...+\frac{1}{n})]\\ &=\lim_{n\rightarrow\infty}[(\gamma_{2n}+\log 2n)-(\gamma_n+\log n)]\\ &=\lim_{n\rightarrow\infty}(\gamma_{2n}-\gamma_n+\log 2)\\ &=\log(2) \end{align}$$ because $\lim_{n\rightarrow\infty}\gamma_{2n}=\gamma$ and $\lim_{n\rightarrow\infty}\gamma_n=\gamma$.

2nd method:

$$\begin{align}x_n&=\frac{1}{n+1}+\frac{1}{n+2}+...+\frac{1}{2n}\\ &=\frac{1}{n}(\frac{n}{n+1}+\frac{n}{n+2}+...+\frac{n}{2n})\\ &=\frac{1}{n}\sum_{i=1}^{n}\frac{n}{n+i} \end{align}$$

Now, as $n\rightarrow\infty$, each $\frac{n}{n+i}\rightarrow1$, hence $\sum_{i=1}^{n}\frac{n}{n+i}\rightarrow n$. Hence $\lim_{n\rightarrow\infty}x_n=1$.

Thus, from the two methods, I end up with two different answers. I suspect that the second method is wrong, but I cannot identify where the mistake is. I would also like to know a method to evaluate the limit without using Euler's constant. So if anyone can point out the mistake in the 2nd method and repair it, it will be of great help to me.

Thanks in advance.
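A quick numerical check makes clear which method to trust:

```python
import math

def x(n):
    """x_n = 1/(n+1) + 1/(n+2) + ... + 1/(2n)."""
    return sum(1.0 / (n + i) for i in range(1, n + 1))

for n in (10, 100, 10_000):
    print(n, x(n))
print(math.log(2))   # the values above approach this, not 1
```

The flaw in the 2nd method is the step $\sum_{i=1}^{n}\frac{n}{n+i}\to n$: the number of terms grows with $n$, so you cannot pass to the limit term by term inside the sum. The repaired version reads the same expression as a Riemann sum, $\frac1n\sum_{i=1}^{n}\frac{1}{1+i/n}\to\int_0^1\frac{dx}{1+x}=\log 2$, which also avoids Euler's constant entirely.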

How to prove the given function is not differentiable analytically?

Posted: 18 Apr 2022 10:05 AM PDT

Well the question presented to me is this. The given function is,
$$f\left( x \right) = \left\{ {\begin{array}{*{20}{c}}{\frac{1}{2}x + 2,\,\;\;\;x < 2}\\{\sqrt {2x} ,\;\;\;\;\;\;x \ge 2}\end{array}} \right. $$ Now I have to check whether the given function is differentiable at $x=2$.


My approach:
For this function to be differentiable, the left-hand derivative and the right-hand derivative must exist and be equal.
Left hand derivative:
$$\begin{array}{c}\mathop {\lim }\limits_{x \to {2^ - }} \frac{{f\left( x \right) - f(2)}}{{x - 2}} = \mathop {\lim }\limits_{x \to {2^ - }} \frac{{\left( {\frac{1}{2}x + 2} \right) - \left( {\frac{1}{2}\left( 2 \right) + 2} \right)}}{{x - 2}}\\ = \mathop {\lim }\limits_{x \to {2^ - }} \frac{{\left( {\frac{1}{2}x + 2} \right) - 3}}{{x - 2}}\\ = \mathop {\lim }\limits_{x \to {2^ - }} \frac{{\frac{1}{2}x - 1}}{{x - 2}}\\ = \mathop {\lim }\limits_{x \to {2^ - }} \frac{{x - 2}}{{2\left( {x - 2} \right)}}\end{array} $$
This gives,
$$\begin{array}{c}\mathop {\lim }\limits_{x \to {2^ - }} \frac{{f\left( x \right) - f(2)}}{{x - 2}} = \mathop {\lim }\limits_{x \to {2^ - }} \frac{1}{2}\\ = \frac{1}{2}\end{array} $$


Right hand derivative:
$$\begin{array}{c}\mathop {\lim }\limits_{x \to {2^ + }} \frac{{f\left( x \right) - f(2)}}{{x - 2}} = \mathop {\lim }\limits_{x \to {2^ + }} \frac{{\left( {\sqrt {2x} } \right) - \left( {\sqrt {2\left( 2 \right)} } \right)}}{{x - 2}}\\ = \mathop {\lim }\limits_{x \to {2^ + }} \frac{{\sqrt {2x} - \sqrt 4 }}{{x - 2}}\\ = \mathop {\lim }\limits_{x \to {2^ + }} \frac{{\sqrt {2x} - 2}}{{x - 2}}\\ = \mathop {\lim }\limits_{x \to {2^ + }} \frac{{\sqrt {2x} - 2}}{{x - 2}} \times \frac{{\sqrt {2x} + 2}}{{\sqrt {2x} + 2}}\end{array} $$
This gives,
$$\begin{array}{c}\mathop {\lim }\limits_{x \to {2^ + }} \frac{{f\left( x \right) - f(2)}}{{x - 2}} = \mathop {\lim }\limits_{x \to {2^ + }} \frac{{2x - 4}}{{\left( {x - 2} \right)\left( {\sqrt {2x} + 2} \right)}}\\ = \mathop {\lim }\limits_{x \to {2^ + }} \frac{2}{{\left( {\sqrt {2x} + 2} \right)}}\\ = \frac{2}{{2 + 2}}\\ = \frac{1}{2}\end{array} $$


The problem is that the right-hand derivative and the left-hand derivative come out the same, misleadingly suggesting that the function is differentiable at $x=2$.


The graph of the function [graph of $f(x)$ omitted] shows a jump: the function is discontinuous at $x=2$, so it is not differentiable at $x=2$.


Where have I gone wrong? How do I prove analytically, without the help of the graph, that the function is not differentiable at $x=2$?
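The analytic fix is that the left-hand difference quotient must use the true value $f(2)=\sqrt{2\cdot 2}=2$ (from the $x\ge 2$ branch), not $3$; with the correct $f(2)$ the quotient blows up. A numerical sketch of both points:

```python
def f(x):
    return 0.5 * x + 2 if x < 2 else (2 * x) ** 0.5

# The left-hand limit is 3, but f(2) = 2: f is discontinuous at x = 2.
print(f(2 - 1e-9), f(2))

# Using the correct f(2) = 2, the left difference quotient diverges as
# h -> 0, so the left-hand derivative does not exist.
h = 1e-3
print((f(2 - h) - f(2)) / (-h))   # about -1000; grows like -1/h
```

So the computed "left-hand derivative of $\frac12$" was really the derivative of the left branch, not of $f$ at $2$; differentiability requires continuity first.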

Where does the sum of $\sin(n)$ formula come from?

Posted: 18 Apr 2022 10:22 AM PDT

I have seen Lagrange's formula for the sum of $\sin(n)$ from $1$ to $n$ during one of my classes last week, but I never saw how it came to be. I tried googling it to find a proof, but couldn't seem to find any, as searches kept bringing up his other work and bare statements of the formula rather than derivations/proofs.

I'm really interested to see where it comes from so if anyone has any nice proofs that would be appreciated.

Thank you!
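For reference, the identity usually meant here is $\sum_{k=1}^{n}\sin k\theta=\frac{\sin\frac{n\theta}{2}\,\sin\frac{(n+1)\theta}{2}}{\sin\frac{\theta}{2}}$. One standard route (not necessarily Lagrange's original argument): multiply the sum by $2\sin\frac{\theta}{2}$, apply the product-to-sum identity $2\sin k\theta\,\sin\frac{\theta}{2}=\cos\big(k-\tfrac12\big)\theta-\cos\big(k+\tfrac12\big)\theta$, and telescope. A quick numerical confirmation:

```python
import math

def sin_sum_direct(n, theta=1.0):
    return sum(math.sin(k * theta) for k in range(1, n + 1))

def sin_sum_closed(n, theta=1.0):
    # Lagrange-style closed form after the telescoping argument
    return (math.sin(n * theta / 2) * math.sin((n + 1) * theta / 2)
            / math.sin(theta / 2))

for n in (1, 5, 100):
    assert abs(sin_sum_direct(n) - sin_sum_closed(n)) < 1e-12
print("identity checked")
```

The same telescoping with $\cos k\theta$ in place of $\sin k\theta$ gives the companion cosine-sum formula.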

For $\Sigma = \{ 0,1 \}$, $A$ is the set of strings that contain a $1$ in their middle third, and $B$ the set of strings that contain two $1$'s in their middle third.

Posted: 18 Apr 2022 10:23 AM PDT

Language $A$ can also be represented as $$A = \{ uvw \mid u,w \in \Sigma^*,\ v \in \Sigma^* 1 \Sigma^*,\ |u| = |w| \ge |v| \}$$

Language $B$ can also be represented as $$B = \{ uvw \mid u,w \in \Sigma^*,\ v \in \Sigma^* 1 \Sigma^* 1 \Sigma^*,\ |u| = |w| \ge |v| \}$$

I have to prove that $A$ is a CFL and that $B$ is not a CFL.

To prove that $A$ is a CFL, I have to show that a CFG can be made. I made a CFG for this language:

$$L = \{ uv \mid u \in \Sigma^*,\ v \in \Sigma^* 1 \Sigma^*,\ |u| \ge |v| \}$$ which is, $$ S \to XSX \mid T1 $$ $$ T \to XT \mid X $$ $$ X \to 0 \mid 1 $$

But after much trying, I am not able to make a correct CFG for $A$, although the two languages seem very similar.

Can I prove that $B$ is not a CFL, maybe by using the pumping lemma?

Riemannian 2-manifolds not realized by surfaces in $\mathbb{R}^3$?

Posted: 18 Apr 2022 10:23 AM PDT

A smooth surface $S$ embedded in $\mathbb{R}^3$ whose metric is inherited from $\mathbb{R}^3$ (i.e., distance measured by shortest paths on $S$) is a Riemannian $2$-manifold: differentiable because smooth and with the metric just described. Two questions:

  1. Are such surfaces a subset of all Riemannian $2$-manifolds? Are there Riemannian 2-manifolds that are not "realized" by any surface embedded in $\mathbb{R}^3$? I assume _Yes_.
  2. If so, is there any characterization of which Riemannian 2-manifolds are realized by such surfaces? In the absence of a characterization, examples would help.

Thanks!

Edit. In light of the useful responses, a sharpening of my question occurs to me: 3. Is the only impediment embedding vs. immersion? Is every Riemannian 2-manifold realized by a surface immersed in $\mathbb{R}^3$?
