Sunday, August 15, 2021

Recent Questions - Mathematics Stack Exchange

Number of $n$-digit numbers that don't include 5 as a digit

Posted: 15 Aug 2021 08:58 PM PDT

Given $n > 0$, where $n$ is the number of digits, how many $n$-digit numbers are there that do not include the digit $5$?

I think this is just a combinatorics problem.

Suppose $x_1,x_2,x_3,\ldots,x_n$ represent the first, second, third, ..., $n$-th digits. (By first digit I mean the most significant, so the $n$-th digit is the ones place.) $x_1$ has $8$ choices: $x_1 \in \{1,2,\ldots,4, 6, \ldots, 9\}$. Each of $x_2, x_3, \ldots, x_n$ has $9$ choices.

So the answer appears to be $8 \cdot 9^{n - 1}$. Is this correct, or am I missing something?
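A quick brute-force check of the proposed count $8 \cdot 9^{n-1}$ for small $n$ supports the reasoning:

```python
# Brute-force count of n-digit numbers containing no digit 5,
# compared against the proposed formula 8 * 9**(n-1).
def count_no_five(n):
    return sum('5' not in str(x) for x in range(10**(n - 1), 10**n))

for n in range(1, 5):
    assert count_no_five(n) == 8 * 9**(n - 1)
```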

How can we be sure that the non-trivial kernel contains only non-negative integers?

Posted: 15 Aug 2021 08:55 PM PDT

Let $G$ be a group and consider the additive group of integers $\mathbf Z$. For any fixed $s\in G$, show that the function $$\begin{align}\phi:\mathbf Z&\to G\\n&\mapsto s^n\end{align}$$ is a homomorphism. Deduce from this that if $G$ is finite, then $\text{Ker}(\phi)$ is nontrivial and therefore there exists a positive integer $m$ such that $s^m=e$. (Hint: Can a map from an infinite set to a finite set be injective? Consider the answer to this question in light of Corollary 2-8.)

For any $n,m\in\mathbf Z,s\in G$, we have $\phi(n+m)=s^{n+m}$ and $\phi(n)\phi(m)=s^ns^m=s^{n+m}$. If $G$ is finite, $\phi$ cannot be injective because $\mathbf Z$ is infinite. Therefore, by Corollary 2-8, the kernel of $\phi$ is non-trivial. (Corollary 2-8 states that a homomorphism of groups is injective if and only if it has trivial kernel.)

However, it implies only that there exists a non-zero integer $m$ such that $s^m=e$, where $e$ is the identity in $G$. How can we be sure that $m$ is positive?

Definition of homology from the nLab

Posted: 15 Aug 2021 08:52 PM PDT

I have a question about the definition of homology of chain complexes from the nLab. The definition provided there is given by a diagram (image not reproduced here).

I don't know how we have $\text{coker}(V_{n+1} \to \ker(\delta_{n-1})) \cong \ker(\text{coker}(\delta_n) \to \text{im}(\delta_{n-1})) $ from the diagram.
I do see that if we let $f:V_{n+1} \to \ker(\delta_{n-1})$, $\phi:\ker(\delta_{n-1}) \to \text{coker}(\delta_n)$, and let $\pi:\ker(\delta_{n-1}) \to \text{coker}(f)$ be the cokernel of $f$, then we have a unique arrow $h:\text{coker}(f) \to \text{coker}(\delta_n)$, and I can show that $ (\text{im}(\delta_{n-1}) \xleftarrow[]{} \text{coker}(\delta_{n})) \circ h = 0$, so $h$ factors through a unique arrow $\text{coker}(f) \to \ker(\text{coker}(\delta_{n}) \to \text{im}(\delta_{n-1}) )$. If I can find an arrow $\ker(\text{coker}(\delta_{n}) \to \text{im}(\delta_{n-1}) ) \to \text{coker}(f)$, then I am done, but I'm not sure how to find that arrow.
Thank you.

Express $g(x)$ in terms of $x$ and classify the function

Posted: 15 Aug 2021 08:57 PM PDT

$g(x) = 2f(x) - 1$, where $g:\mathbb{R}-\{2\}\to\mathbb{R}-\{1\}$. Express $g(x)$ in terms of $x$. Is the function $g$ one-one, many-one, into, or none of these? And what is the inverse of the function $f(x)$?

Prove that the following series is divergent.

Posted: 15 Aug 2021 08:52 PM PDT

Prove that the series $\displaystyle \sum_{n=1}^{\infty} \frac{1}{2}i^{n}$ is divergent.
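A quick numerical check (assuming $i$ is the imaginary unit): the terms $\tfrac12 i^n$ have constant modulus $\tfrac12$, so they do not tend to $0$, and the partial sums merely cycle through four values:

```python
# Partial sums of sum 0.5 * i**n cycle with period 4 and never settle,
# consistent with divergence (the terms do not tend to 0).
partial, sums = 0j, []
for n in range(1, 21):
    partial += 0.5 * (1j ** n)
    sums.append(partial)
assert all(sums[k] == sums[k + 4] for k in range(len(sums) - 4))
```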

Effective divisor on a curve

Posted: 15 Aug 2021 08:38 PM PDT

I was reading the online courseware by MIT on Arithmetic Geometry and I came across this paragraph:

Let $C/k$ be a (smooth, projective, geometrically irreducible) curve of genus $1$ over a perfect field $k$. Let $n$ be the least positive integer for which Div$_k C$ contains an effective divisor $D$ of degree $n$ (such divisors exist; take the pole divisor of any non-constant function in $k(C)$, for example). If $C$ has a $k$-rational point, then $n = 1$ and $C$ is an elliptic curve.

I do not understand why such an $n$ exists, given that Div$_kC$ is the free abelian group generated by the points of $C$, so we can obtain effective divisors of any degree. Furthermore, what does the parenthesized part mean? Also, the final sentence seems to suggest that if $C$ does not have a $k$-rational point, then $n$ must be greater than $1$.

'Almost Lipschitz' map without fixed points

Posted: 15 Aug 2021 08:36 PM PDT

Every Lipschitz function $f: \mathbb{R} \to \mathbb{R}$ with constant $K \in (0, 1)$ has a fixed point. I am interested in whether there exists a function $h: \mathbb{R} \to \mathbb{R}$ which satisfies the condition $$ |h(x) - h(y)| < |x - y|, \qquad x, y\in \mathbb{R}, \; x \neq y $$ and has no fixed points.
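Whether such an $h$ exists can at least be probed numerically. A commonly cited candidate (my own suggestion, not from the post) is $h(x)=\sqrt{x^2+1}$; spot checks suggest it strictly shrinks distances yet stays above the diagonal:

```python
import math, random

# Spot-check (not a proof) of h(x) = sqrt(x^2 + 1):
#   |h(x) - h(y)| < |x - y| for x != y, and h(x) > x for all x,
# so h would have no fixed point.
def h(x):
    return math.sqrt(x * x + 1)

random.seed(0)
for _ in range(10_000):
    x, y = random.uniform(-50, 50), random.uniform(-50, 50)
    if x != y:
        assert abs(h(x) - h(y)) < abs(x - y)
    assert h(x) > x
```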

If $\tan(\pi/12 -x),\tan(\pi/12), \tan(\pi/12 + x)$, are 3 consecutive terms of a GP then sum of the solutions in $[0, 314]$ is $k\pi$. What is $k$?

Posted: 15 Aug 2021 08:41 PM PDT

If $\tan(\frac{\pi}{12} -x),\tan(\frac{\pi}{12}), \tan(\frac{\pi}{12} + x)$ are, in order, three consecutive terms of a GP, then the sum of all the solutions in $[0, 314]$ is $k\pi$. Find the value of $k$.

My Approach: I applied the $b^2=ac$ formula and, after basic trigonometric manipulations, I got the equation $\frac{\cos(\frac{\pi}{6})}{\cos(2x)}=\cot^2(\frac{\pi}{12})$, and this is where I am stuck.

Real numbers double plus sign superscript

Posted: 15 Aug 2021 08:26 PM PDT

Started reading Diffusions, Markov Processes, and Martingales: Volume 1, Foundations (Cambridge Mathematical Library) and at the beginning of the section titled "Some Frequently Used Notation" I see there is a set defined to be equal to the interval from 0 to infinity, exclusive, shown as the real numbers symbol with a double plus superscript:

From context, real numbers single plus superscript is non-negative real numbers, and real numbers double plus superscript is positive real numbers. However, I'd never seen this notation before and was wondering if this is (a) standard notation, and (b) if the above is how it is actually to be interpreted.

Thank you!

Complex roots of cubic equation

Posted: 15 Aug 2021 08:20 PM PDT

I have a problem: find the second root of the equation $$x^3+px^2+qx+r=0,$$ given that $p, q$ and $r$ are real, that $2p+q+2r=0,$ and that one of its roots is $1+i$.

I've tried solving the equation by writing it as a sum of a cubic and a quadratic, but now I'm stuck.

$\sup_f \int_0^1 f(t^a)dt$ where $f$ is continuous, $\int_0^1 |f(t)|^adt\leq 1$

Posted: 15 Aug 2021 08:09 PM PDT

Evaluate $\sup_f \int_0^1 f(t^a)\,dt$, where the supremum is over continuous $f$ with $\int_0^1 |f(t)|^a\,dt\leq 1$, and $a>0$.

It is natural to proceed as follows: $\int_0^1 f(t^a)\,dt=\int_0^1 f(s)\frac{1}{a}s^{ 1/a-1}\,ds\leq \frac1a \left(\int_0^1 |f(s)|^a\,ds\right)^{1/a}\left(\int_0^1 s^{-1}\,ds\right)^{(a-1)/a}$, where Hölder's inequality is used. But the last factor is $\infty$!

How do I solve $\lim\limits_{x\to\infty}\left(4^x+3^x\right)^{\frac1x}$ without using L'Hôpital's rule?

Posted: 15 Aug 2021 08:17 PM PDT

$\lim\limits_{x\to\infty}\left(4^x+3^x\right)^{\frac1x}$.

Help! I don't know how to solve this problem. The teacher said we can't use L'Hôpital's rule.
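A numerical probe, together with the squeeze $4 \le (4^x+3^x)^{1/x} \le 4\cdot 2^{1/x}$ (which avoids L'Hôpital entirely), suggests the limit is $4$:

```python
# Squeeze check: 4**x < 4**x + 3**x <= 2 * 4**x for x > 0, hence
# 4 < (4**x + 3**x)**(1/x) < 4 * 2**(1/x), and the upper bound tends to 4.
def g(x):
    return (4**x + 3**x) ** (1 / x)

for x in [5, 10, 20, 50]:
    assert 4 < g(x) < 4 * 2 ** (1 / x)
```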

A bag contains 15 marbles: 10 are yellow and 5 are black. 4 marbles are selected from the bag.

Posted: 15 Aug 2021 08:06 PM PDT

A bag contains $15$ marbles of which $10$ are yellow and $5$ are black. $4$ marbles are selected from the bag.

  1. How many different samples are possible?

I got $1365$ from this expression: $\cfrac {15!} {11! \cdot 4!}$

  2. How many samples consist entirely of yellow marbles?

I'm thinking $10!$

  3. How many samples have $2$ yellow and $2$ black marbles?

I got $450$ by $$\binom{10}{2}\binom{5}{2}=45 \cdot10 =450 $$

  4. How many samples have exactly $3$ yellow marbles?

I got $1200$ by $$\binom{10}{3}\binom{5}{4-3}=120 \cdot10 =1200 $$
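These counts can be cross-checked by brute-force enumeration against the binomial-coefficient formulas (a quick sketch; marbles $0$-$9$ are yellow, $10$-$14$ black):

```python
from itertools import combinations
from math import comb

# Enumerate all 4-marble samples and compare each count with the
# corresponding binomial-coefficient formula.
samples = list(combinations(range(15), 4))
yellows = [sum(m < 10 for m in s) for s in samples]

assert len(samples) == comb(15, 4) == 1365                 # part 1
assert yellows.count(4) == comb(10, 4)                     # part 2: all yellow
assert yellows.count(2) == comb(10, 2) * comb(5, 2) == 450 # part 3
assert yellows.count(3) == comb(10, 3) * comb(5, 1)        # part 4
```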

Number of Quadratic Bézier Curve-Ray Intersections

Posted: 15 Aug 2021 07:57 PM PDT

Given some quadratic Bézier curve $B(t)$ and some ray $R$, is there an equation to calculate the number of intersections between the two? (For my application I only need to consider 2-D space.)
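One way to see a closed-form count exists: dotting $B(s)-P$ with the ray's normal gives a quadratic in the curve parameter, so there are at most two intersections, and counting just filters the roots. A sketch (the names and parametrization below are my own, not from the post; tangencies count with multiplicity):

```python
import math

# Count intersections of the quadratic Bezier through p0, p1, p2 with the
# ray P + t*d (t >= 0): project onto the ray's normal to get a quadratic
# in the curve parameter s, then keep roots with s in [0, 1] and t >= 0.
def bezier_ray_intersections(p0, p1, p2, P, d):
    nx, ny = -d[1], d[0]                       # a normal to the ray
    def along_normal(q):                       # n . (q - P)
        return nx * (q[0] - P[0]) + ny * (q[1] - P[1])
    c0, c1, c2 = along_normal(p0), along_normal(p1), along_normal(p2)
    a, b, c = c0 - 2 * c1 + c2, 2 * (c1 - c0), c0   # a s^2 + b s + c = 0
    if abs(a) < 1e-12:                         # degenerate: linear in s
        roots = [-c / b] if abs(b) > 1e-12 else []
    else:
        disc = b * b - 4 * a * c
        if disc < 0:
            return 0
        r = math.sqrt(disc)
        roots = [(-b - r) / (2 * a), (-b + r) / (2 * a)]
    count = 0
    for s in roots:
        if 0 <= s <= 1:
            bx = (1 - s)**2 * p0[0] + 2 * s * (1 - s) * p1[0] + s**2 * p2[0]
            by = (1 - s)**2 * p0[1] + 2 * s * (1 - s) * p1[1] + s**2 * p2[1]
            if (bx - P[0]) * d[0] + (by - P[1]) * d[1] >= 0:  # t >= 0
                count += 1
    return count
```

For instance, the arch through $(0,0),(1,2),(2,0)$ is met twice by a rightward ray at height $0.5$ and missed by one at height $3$.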

Change of variable in non-linear ODE

Posted: 15 Aug 2021 07:56 PM PDT

I have the following ODE I wish to study:

\begin{equation} sp(x)-s(s+1)p(x)^2+s(s+1)\frac{ \mathrm{d} }{ \mathrm{d} x}\left((1+p(x))\frac{ \mathrm{d} p}{ \mathrm{d} x}\right)=0 \end{equation} where $s\in \mathbb{R}$. When $s=0$ we recover the time-independent diffusion equation.

Does this equation have a name? I tried to introduce a change of variable $q(x)=F(p(x))$ to get rid of $p$ and $p^2$, but I do not know how to proceed.

How would I simplify this equation in order to see if it reduces to a famous case?

Fast computation of real eigenvalues of special 4x4 matrix

Posted: 15 Aug 2021 08:38 PM PDT

For distance calculations between ellipses$^1$ I have to find the real eigenvalues of a lot of $4\times 4$ matrices that have the shape

$$\left[ \begin{matrix} -2a & b & 2a & a^2 \\ 1 & 0 & 0 & 0\\ 0 & 1 & 0 & 0\\0&0&1&0 \end{matrix}\right]\tag{1}$$ with $a,b\in \mathbb{R}$ and characteristic polynomial $$\lambda^4+2a\lambda^3-b\lambda^2-2a\lambda-a^2=0.\tag{2}$$ One can assume that $a$ and $b$ are related in such a way that there are always $2$ complex conjugate eigenvalues and $2$ real eigenvalues $\pm m$. Only the absolute value of $m$ is of interest.

There is some redundancy as the complex solutions are not needed and the real solutions differ only by the sign.

Does this redundancy allow for faster solving of the polynomial to get $|m|$?

$^1$ Ik-Sung Kim: An algorithm for finding the distance between two ellipses, Commun. Korean Math. Soc. 21, 559 (2006).
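As a baseline for timing any structure-exploiting trick (a sketch; the function name is mine): note that the constant term $-a^2 \le 0$ guarantees a real root whenever $a \neq 0$, so a direct polynomial solve already yields $|m|$, and candidate shortcuts can be validated against it.

```python
import numpy as np

# Baseline: solve the quartic (2) directly and return |m| from its real
# roots.  Since the constant term is -a^2 <= 0, a real root exists for a != 0.
def real_eigen_abs(a, b):
    roots = np.roots([1.0, 2 * a, -b, -2 * a, -a * a])
    real = roots[np.abs(roots.imag) < 1e-9].real
    return np.abs(real).max()
```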

Literature on "sink/source partition" of the incidence matrix of an oriented graph

Posted: 15 Aug 2021 08:14 PM PDT

I'm looking for any books/papers that describe some properties of the node-by-edge incidence matrix, $B$, of an oriented graph $G = (V,E)$ separated into the "source only" part $B_+$ and "sink only" part $B_-$, such that both $B_+$ and $B_-$ contain only $0,1$ entries and $B = B_+ - B_-$.

An example of this separation is the following graph (the edges are oriented counter-clockwise, so 1->2->3->1, but this is just an arbitrary choice):

      (1)
      / \
     /   \
   (2)---(3)

We would then have $B = \begin{bmatrix} 1 & 0 & -1 \\ -1 & 1 & 0 \\ 0 & -1 & 1 \end{bmatrix}$, which can be separated into $B_+ = \begin{bmatrix} 1 & 0 & \color{red}{0} \\ \color{red}{0} & 1 & 0 \\ 0 & \color{red}{0} & 1 \end{bmatrix}$ and $B_- = \begin{bmatrix} \color{red}{0} & 0 & 1 \\ 1 & \color{red}{0} & 0 \\ 0 & 1 & \color{red}{0} \end{bmatrix}$.

A very simple property is that if we remove the edge orientations and look at the underlying undirected graph, then its incidence matrix is $\vert B \vert = B_+ + B_-$. After a quick search I was unable to find anything nontrivial specific to this source/sink separation of the incidence matrix, only the incidence matrix itself. Perhaps I just don't know the correct terms to search for.

Some example properties I'm interested in: given the rank and nullspace of $B$, can we conclude anything about those of $B_+$ and $B_-$ in general?

Any help on this is appreciated!
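For the triangle example above, the separation is easy to check numerically, and it already shows that the rank of $B$ does not determine the ranks of $B_+$ and $B_-$ (here $B$ has rank $2$, while $B_+$ is a permutation matrix of rank $3$):

```python
import numpy as np

# Triangle example: B = B+ - B-, |B| = B+ + B-, and the ranks differ.
B = np.array([[ 1,  0, -1],
              [-1,  1,  0],
              [ 0, -1,  1]])
B_plus = np.maximum(B, 0)      # source-only part (keeps the +1 entries)
B_minus = np.maximum(-B, 0)    # sink-only part (the -1 entries, made +1)

assert (B == B_plus - B_minus).all()
assert (np.abs(B) == B_plus + B_minus).all()
assert np.linalg.matrix_rank(B) == 2        # connected graph: rank |V| - 1
assert np.linalg.matrix_rank(B_plus) == 3   # permutation matrix: full rank
```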

Moving Horses behind closed doors

Posted: 15 Aug 2021 08:19 PM PDT

Here's a question I've been stuck on for a while now: a stable consisting of a row of $8$ stalls houses horses $1$-$8$. You may check whether a horse is in a stall by opening it, but you must close it immediately after checking. Every time you close a stall door, every horse moves to the stall on its left or right, independently. A stall can hold any number of horses, but initially each contains at most $1$ horse. How many times must you open doors to see each horse at least once?

Edit: Assume that when you open a door, you can see the number of horses behind the door and that you can recognize a horse that you've already seen before.

So my main thought so far has been to define a function $f(x)$ that asks: given that we have seen $x$ horses so far, how many more doors do we need to open to see a new horse? Then $\sum_{i=1}^7 f(i)$ is the desired answer. But the fact that the possible positions of each horse depend on where it started makes this function harder to compute.

Conditional constraints for continuous variables

Posted: 15 Aug 2021 08:29 PM PDT

How could we model conditional constraints for two continuous variables? Suppose the two variables are: $$x\geq0$$ $$y\in\mathbb R.$$ The conditions are: if $y>0$, then $x>0$ and if $y\leq0$, then $x=0$.

Given commuting Hermitian matrices $A,B$, can I construct an operator $C$ with eigenspaces from $A$ and $B$

Posted: 15 Aug 2021 08:32 PM PDT

Suppose I have two finite dimensional, Hermitian operators $A,B : V\rightarrow V$ that commute. Let $\lambda_i$ denote the eigenvalues of $A$, and $\gamma_j$ denote the eigenvalues of $B$, without repetition. By the spectral theorem, $A$ and $B$ admit a shared eigenbasis in which they are both diagonal, and, as a consequence, one may decompose the vector space $V$ as a direct sum of the "simultaneous eigenspaces"

$$V = \bigoplus_{ij} E_{ij}^{AB},$$

where

$$v \in E_{ij}^{AB} \text{ iff } Av = \lambda_i v \text{ and } Bv = \gamma_j v.$$

My question is: can I construct, from $A$ and $B$, a single operator whose eigenspaces are $E_{ij}^{AB}$? Ideally, such an operator would be an explicit function of $A$ and $B$.

Below are two of my attempts at this problem.


My first attempts were the following two operators:

$$C = A + B, \quad D = AB.$$

Indeed, we have, for $v \in E_{ij}^{AB}$:

$$Cv = (\lambda_i + \gamma_j)v \quad Dv = \lambda_i \gamma_j v,$$

meaning that $C$ and $D$ are diagonal in the shared eigenbasis of $A,B$. However, the fact that the maps $(\lambda_i, \gamma_j) \mapsto \lambda_i + \gamma_j$ and $(\lambda_i, \gamma_j) \mapsto \lambda_i\gamma_j$ aren't necessarily injective means that the eigenspaces of $C$ and $D$ might be larger than the $E_{ij}^{AB}$, and contain the $E_{ij}^{AB}$ as proper subspaces. So perhaps my problem can be solved if I can find, given a collection of eigenvalue tuples $\{(\lambda_i, \gamma_j)\}$ an injective map $f : \{(\lambda_i, \gamma_j)\} \rightarrow \mathbb{R}$, and then construct an operator function of $A,B$ that implements this map, but I am not sure how to proceed from here.
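A toy numerical illustration (my own example, not from the post) of both the collision and the weighted fix: with $A=\operatorname{diag}(0,1)$ and $B=\operatorname{diag}(1,0)$, $C=A+B=I$ merges the two simultaneous eigenspaces, while $A+tB$ with a suitable weight $t$ keeps them apart:

```python
import numpy as np

# A = diag(0, 1), B = diag(1, 0) commute; C = A + B = I collapses the two
# simultaneous eigenspaces into one.  A weighted sum A + t*B, with t chosen
# so that (lam, gam) -> lam + t*gam is injective on the spectra, separates them.
A = np.diag([0.0, 1.0])
B = np.diag([1.0, 0.0])

assert np.allclose(np.linalg.eigvalsh(A + B), [1.0, 1.0])   # merged

t = 3.0          # any t avoiding lam_i - lam_k = t * (gam_l - gam_j) works here
D = A + t * B
assert len(set(np.round(np.linalg.eigvalsh(D), 9))) == 2    # separated
```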


Furthermore, I note that this problem can be solved easily using the spectral theorem. Let $\{v_{ij}^1,\ldots, v_{ij}^{N_{ij}}\}$ denote an orthonormal eigenbasis for $E_{ij}^{AB}$. Then, if $f : \mathbb{N}^2 \rightarrow \mathbb{R}$ is an injective function,

$$ X = \sum_{i}\sum_{j}\sum_{k=1}^{N_{ij}} f(i,j) v_{ij}^k \left(v_{ij}^k\right)^T,$$

is the spectral decomposition of an operator whose eigenspaces are $E_{ij}^{AB}$. However, ideally I would like an operator that is a function of $A$ and $B$, that doesn't make reference to the eigenvectors.

Automorphisms of Affine space $\mathbb R^2$ under additive structure is $GL(2,R)$

Posted: 15 Aug 2021 07:57 PM PDT

I am stuck on proving how linear transformations correspond to automorphisms. I thought of trying to show a one-to-one correspondence between $\operatorname{Aut}(\mathbb R^2,+)$ and $GL(2,\mathbb R)$. I know that $2\times2$ invertible matrices are in one-to-one correspondence with linear transformations from $\mathbb R^2$ to itself, but I have no rigorous proof that these linear transformations are in one-to-one correspondence with $\operatorname{Aut}(\mathbb R^2,+)$. Would it suffice to show that the linear transformations are bijective homomorphisms (under the additive structure)? Any help or hints are appreciated.

Solving particular solutions of higher order integrals

Posted: 15 Aug 2021 07:52 PM PDT

Given the differential equation: $\frac{d^2y}{dx^2}=6x$

Integrating twice would yield the general solution: $y = x^3 +C_1x+C_2$

With the constraints $y'(0)=2$ and $y(0)=1$, the particular solution would be:

$y=x^3+2x+1$

A bit of an elementary question, but why wouldn't $2x$ cancel out when substituting $0$ into the second constraint when solving for $C_2$?

Following that: $2=3(0)^2+C_1$ from the first integration

and $1 = 0^3+2(0)+C_2$ from the second integration

Is the given particular solution incorrect, or is my understanding wrong?
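A direct check confirms the stated particular solution: the $2x$ term does vanish at $x=0$ inside $y(0)=1$ (which is exactly what forces $C_2=1$), but it still belongs to the general formula because $C_1$ was already fixed by $y'(0)=2$:

```python
# Verify y(x) = x**3 + 2x + 1 against y'' = 6x, y'(0) = 2, y(0) = 1.
def y(x):   return x**3 + 2 * x + 1
def yp(x):  return 3 * x**2 + 2      # y'
def ypp(x): return 6 * x             # y''

assert y(0) == 1 and yp(0) == 2
for x in (-2.0, 0.5, 3.0):
    assert ypp(x) == 6 * x
```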

Confusion about a standard rational map $\operatorname{Proj} A[x_0,\dots,x_n] \dashrightarrow \operatorname{Proj} A[x_0,\dots,x_{n-1}]$.

Posted: 15 Aug 2021 08:29 PM PDT

From Vakil's FOAG:

Definition 6.5.1: A rational map $\pi$ from $X$ to $Y$, denoted $\pi: X \dashrightarrow Y$, is a morphism on a dense open set, with the equivalence relation $(\alpha: U \to Y) \sim (\beta: V \to Y)$ if there is a dense open set $Z \subset U \cap V$ such that $\alpha|_Z = \beta|_Z$.

An important example is the projection $\operatorname{Proj} A[x_0,\dots,x_n] \dashrightarrow \operatorname{Proj} A[x_0,\dots,x_{n-1}]$ given by $[x_0, \dots, x_n] \mapsto [x_0, \dots, x_{n-1}]$. (How precisely is this a rational map in the sense of Definition 6.5.1? What is its domain of definition?)


For $1\le i \le n-1$, the ring homomorphisms ${({A[x_0, \dots, x_{n-1}]}_{x_i})}_0 \to {({A[x_0, \dots, x_{n}]}_{x_i})}_0$ where $\frac{x_j}{x_i} \mapsto \frac{x_j}{x_i}$ induce morphisms of schemes $$\alpha_i: D_+(x_i) \to D_+(x_i) \hookrightarrow \operatorname{Proj} A[x_0, \dots, x_{n-1}]$$ such that $\alpha_i |_{D_+(x_ix_j)} = \alpha_j |_{D_+(x_ix_j)}$.

So, I believe this may give the rational map in question, and where the domain of definition is $\bigcup_{i=1}^{n-1} D_+(x_i) \subset \operatorname{Proj} A[x_0,\dots,x_n]$.

However, I am not sure because what is going on with the $x_n$ coordinate when the example says $[x_0, \dots, x_n] \mapsto [x_0, \dots, x_{n-1}]$?

What is the use of studying trigonometric identities?

Posted: 15 Aug 2021 08:26 PM PDT

I have solved many questions regarding trigonometry in which we have to show that the LHS equals the RHS.

For example, $\frac{1}{\csc A-\cot A} - \frac{1}{\sin A} = \frac{1}{\sin A} - \frac{1}{\csc A+\cot A}$, and many more.

When I see such an equation, I try to visualize it. Let us take a right triangle with sides $3,4,5$.
(figure: right triangle with sides $3,4,5$)

So, for the equation that we have to prove, we can just insert the values into the trigonometric expressions (using the triangle) and verify it arithmetically, i.e., take angle $A$ as approximately $37^\circ$.

$\frac{1}{5/3-4/3} - \frac{1}{3/5} = \frac{1}{3/5} - \frac{1}{5/3+4/3}$

If we can just prove that these two expressions are numerically equal, then why do we convert a trigonometric equation written in terms of $\sin$ and $\cos$ into one in terms of $\sec$, $\csc$, $\tan$, etc.?

Ex: $\sec^2 A \cdot \csc^2 A = \tan^2 A + \cot^2 A + 2$. Here, it is asked to change from $\sec,\csc$ to $\tan,\cot$.

What is the use of this conversion if we could just prove that the expressions are numerically equal? What is its use in future careers? In which fields is this used?

Also, is there a different way to visualize these questions about proving such equations?
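The sampled-angle check described above is easy to automate; for instance, the identity $\sec^2 A\,\csc^2 A=\tan^2 A+\cot^2 A+2$ holds at every sampled angle:

```python
import math

# Numerically verify sec^2(A) * csc^2(A) = tan^2(A) + cot^2(A) + 2 at a few
# angles (avoiding multiples of 90 degrees, where both sides blow up).
for deg in (10, 37, 61, 113):
    A = math.radians(deg)
    lhs = (1 / math.cos(A))**2 * (1 / math.sin(A))**2
    rhs = math.tan(A)**2 + (1 / math.tan(A))**2 + 2
    assert math.isclose(lhs, rhs, rel_tol=1e-9)
```

Of course, a finite set of samples only suggests an identity rather than proving it, which is one reason symbolic manipulation is still taught.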

PDF of sum of two exponential random variables multiplied by a constant

Posted: 15 Aug 2021 08:38 PM PDT

I am trying to find the PDF of the sum of two i.i.d. exponential random variables, each multiplied by a constant, i.e.,

$$\gamma = c_1 X_1 + c_2X_2 \tag{1}$$

where both $c_1,c_2$ are constants and $X_1,X_2$ are exponential random variables. This is what I have tried:

Let us write (1) as $\gamma = \gamma_1 + \gamma_2$. The PDFs of $\gamma_1$ and $\gamma_2$ can then be written as $f_{\gamma_1}(x_1) = \frac{1}{c_1\sigma_1^2}\exp\left(\frac{-x_1}{c_1\sigma_1^2}\right)$ and $f_{\gamma_2}(x_2) = \frac{1}{c_2\sigma_2^2}\exp\left(\frac{-x_2}{c_2\sigma_2^2}\right)$, respectively.

Next using convolution we have

$f_{\gamma}(x) = \int_{-\infty}^{{\infty}}f_{\gamma_1}(x_1)f_{\gamma_2}(x-x_1) \,\mathrm{d}x_1$.

Solving this, we finally get

$$f_{\gamma}(x) = \frac{1}{c_1\sigma^2_1} \exp\left(\frac{x_1}{c_2\sigma^2_2}-\frac{x_1}{c_1\sigma^2_1}\right) \tag{2}$$

My query is whether the PDF obtained in (2) is correct.

Any help in this regard will be highly appreciated.
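One way to test a candidate density is Monte Carlo. Assuming the parametrization in the post, $\gamma_i$ is exponential with mean $\mu_i=c_i\sigma_i^2$, and for $\mu_1\neq\mu_2$ the convolution gives the standard hypoexponential density $f(x)=\frac{e^{-x/\mu_1}-e^{-x/\mu_2}}{\mu_1-\mu_2}$. Simulation agrees with that form (the numeric values below are my own test inputs):

```python
import math, random

# Monte Carlo check of the hypoexponential density
#   f(x) = (exp(-x/mu1) - exp(-x/mu2)) / (mu1 - mu2),   mu1 != mu2,
# for gamma = gamma1 + gamma2 with exponential means mu1, mu2.
mu1, mu2 = 2.0, 0.5
def f(x):
    return (math.exp(-x / mu1) - math.exp(-x / mu2)) / (mu1 - mu2)

random.seed(1)
N = 200_000
hits = sum(random.expovariate(1 / mu1) + random.expovariate(1 / mu2) <= 1.0
           for _ in range(N))
steps = 1000                       # midpoint rule for integral of f on [0, 1]
cdf = sum(f((k + 0.5) / steps) for k in range(steps)) / steps
assert abs(hits / N - cdf) < 0.01  # empirical and analytic CDFs agree
```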

With $(AB^{-1})^T$, what comes first, the multiplication or the transpose?

Posted: 15 Aug 2021 08:13 PM PDT

In part of the question we have $(AB^{-1})^T$.

My first thought was that I multiply $A$ and $B^{-1}$, then apply the transpose.

But according to the theory, $(AB)^T = B^T A^T$, which says I should take the transposes first, then multiply.

I'm confused about whether I should stick to the theory or whether what I did is also right.
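Both readings give the same matrix, which is easy to confirm numerically: the rule $(AB)^T=B^TA^T$ is a theorem about the result, not an instruction to transpose before multiplying.

```python
import numpy as np

# (A B^{-1})^T computed directly equals (B^{-1})^T A^T: same matrix either way.
rng = np.random.default_rng(0)
A = rng.standard_normal((3, 3))
B = rng.standard_normal((3, 3)) + 3 * np.eye(3)   # shift keeps B invertible
Binv = np.linalg.inv(B)
assert np.allclose((A @ Binv).T, Binv.T @ A.T)
```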

Smallest $m>1$ such that the number of Collatz steps needed for $238!+m$ to reach $1$ differs from that for $238!+1$. (Known: $10^9 < m < 10^{94}$)

Posted: 15 Aug 2021 07:58 PM PDT

Let $h(x)$ be the number of steps needed for $x$ to reach $1$ in the Collatz/3n+1 problem. I found that

$$h(238!+n)=h(238!+1), \;\; \forall 1 < n \leq 690,000,000$$ Here "!" is the standard factorial.

This is a lot of consecutive terms with the same height and beats the current record by far. Now I am wondering:

What is the smallest $m > 1$, such that $h(238!+m) \neq h(238!+1)$?

I don't know of an efficient way of finding it. We know that $h(2\cdot(238!+1))=h(238!+1)+1$, so $m \leq 238!+1$, but that's a rather large upper bound.

UPDATE 16/08/2021: Martin Ehrenstein has found that $10^9 < m < 10^{94}$. See A346775.
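For reference, the height function itself is simple to implement on Python's big integers, though far too slow to scan $238!+m$ for large $m$ this way:

```python
# h(x): number of Collatz steps (n -> n/2 or 3n+1) needed to reach 1.
def collatz_height(x):
    steps = 0
    while x != 1:
        x = x // 2 if x % 2 == 0 else 3 * x + 1
        steps += 1
    return steps

assert collatz_height(6) == 8     # 6, 3, 10, 5, 16, 8, 4, 2, 1
assert collatz_height(27) == 111  # a well-known value
```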

Points on the hypotenuse of a right-angled triangle

Posted: 15 Aug 2021 07:59 PM PDT

Points $K$ and $L$ are chosen on the hypotenuse $AB$ of triangle $ABC$ $(\measuredangle ACB=90^\circ)$ such that $AK=KL=LB$. Find the angles of $\triangle ABC$ if $CK=\sqrt2CL$.

(figure: right triangle $ABC$ with $K$ and $L$ on the hypotenuse $AB$)

As you can see on the drawing, $CL=x$ and $CK=\sqrt2x$.

I don't know how to approach the problem at all. Since $\measuredangle ACB=90^\circ$, it will be enough to find the measure of only one of the acute angles. Setting $\measuredangle ACK=\varphi_1$ and $\measuredangle BCL=\varphi_2$, I tried applying the law of sines in triangle $KCL$, but it seemed useless in the end. Thank you! I would be grateful to see a solution that avoids coordinate geometry.
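Not a synthetic solution, but a coordinate sanity check (my own setup, not from the post): with $C=(0,0)$, $A=(b,0)$, $B=(0,a)$, the trisection points are $K=(2b/3,a/3)$ and $L=(b/3,2a/3)$, and $CK=\sqrt2\,CL$ reduces to $2b^2=7a^2$:

```python
import math

# With C at the origin and legs along the axes, CK^2 = 2 CL^2 becomes
# 4b^2 + a^2 = 2(b^2 + 4a^2), i.e. 2 b^2 = 7 a^2.
def ck2_minus_2cl2(b, a=1.0):
    ck2 = (2 * b / 3)**2 + (a / 3)**2
    cl2 = (b / 3)**2 + (2 * a / 3)**2
    return ck2 - 2 * cl2

b = math.sqrt(7 / 2)                        # ratio b/a forced by CK = sqrt(2) CL
assert abs(ck2_minus_2cl2(b)) < 1e-12
angle_A = math.degrees(math.atan(1 / b))    # tan(angle A) = a / b
assert abs(angle_A + math.degrees(math.atan(b)) - 90) < 1e-9
```

Under this labeling the acute angles come out numerically to about $28.1^\circ$ and $61.9^\circ$, which can guide a synthetic argument.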

Proof that the limit and integral of a sequence of continuous functions are interchangeable

Posted: 15 Aug 2021 08:09 PM PDT

I want to prove the following theorem.

Let $f_n: \Omega \subset \mathbb{R} \to \mathbb{R}$ be a sequence of continuous functions, $[a,b] \subset \Omega$, with $f_n \to f$ uniformly on $\Omega$ and $f$ integrable on $[a,b]$. Then

$$ \lim_{n \to \infty} \int_a^b (f_n(x)) \,dx = \int_a^b (\lim_{n \to \infty} f_n(x)) \, dx$$

(Note that integrability is already given in this version of the theorem.)

I'm not really sure how to show this at all. What must be proven so that I can interchange limits and integrals?
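The key step is the uniform estimate (a sketch; continuity of each $f_n$ gives integrability of $f_n$, and integrability of $f$ is assumed):

```latex
\left|\int_a^b f_n(x)\,dx - \int_a^b f(x)\,dx\right|
\le \int_a^b \left|f_n(x) - f(x)\right|\,dx
\le (b-a)\,\sup_{x\in[a,b]}\left|f_n(x)-f(x)\right|
\xrightarrow[n\to\infty]{} 0.
```

Uniform convergence is exactly what sends the supremum to $0$; pointwise convergence alone would not suffice.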

Counterexamples to Banach Fixed Point (Banach's Contraction) Theorem with relaxed inequalities?

Posted: 15 Aug 2021 08:53 PM PDT

Banach's Fixed Point Theorem states: let $(X,d)$ be a complete metric space and suppose that $f:X\to X$ is a strong contraction, i.e. there exists $q \in [0, 1)$ such that

$$d(f(x),f(y)) \le q\, d(x,y) \quad \text{for all } x,y\in X.$$

Then there is a unique point $x_0\in X$ such that $f(x_0)=x_0$.

My questions are:

1. If we allow $q$ to be equal to $1$, does the theorem fail? Could someone provide an example?

2. If we replace the strong-contraction condition with the weaker condition $d(f(x),f(y)) < d(x,y)$ for all $x \neq y$, does the theorem fail? Example?
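The classical counterexamples for both relaxations can at least be spot-checked numerically: $f(x)=x+1$ on $\mathbb R$ for $q=1$, and $g(x)=x+1/x$ on the complete space $[1,\infty)$ for the strict-inequality version:

```python
import random

# (1) q = 1: f(x) = x + 1 is an isometry on R with no fixed point.
# (2) strict inequality: g(x) = x + 1/x on [1, inf) satisfies
#     |g(x) - g(y)| < |x - y| for x != y, yet g(x) > x for every x.
f = lambda x: x + 1
g = lambda x: x + 1 / x

random.seed(2)
for _ in range(10_000):
    x, y = random.uniform(1, 100), random.uniform(1, 100)
    assert abs(abs(f(x) - f(y)) - abs(x - y)) < 1e-9 and f(x) != x
    if x != y:
        assert abs(g(x) - g(y)) < abs(x - y)
    assert g(x) > x
```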
