Saturday, May 21, 2022

Recent Questions - Mathematics Stack Exchange


How is the Wilson-Han-Powell SQP algorithm applied?

Posted: 21 May 2022 04:54 PM PDT

Say for example we need to minimize $x_2$ subject to $x_1^2+x_2^2-1=0$ starting at $x_1=x_2=1/2$ and using $B=\nabla^2[x_2+\lambda(x_1^2+x_2^2-1)]$ with $\lambda=1$.

Now, the WHP-SQP algorithm goes like this:

Choose an initial point $x_0$ and an initial matrix $B_0$. Then repeat for $k=0,1,2,...$ until $T(x_{k+1},\lambda_{k+1})<\epsilon$:

  1. Obtain $p_k$ and $\lambda_{k+1}$ by solving the QP subproblem: minimize $\frac{1}{2}p^TB_kp+\nabla F(x_k)^Tp$ subject to $c_i(x_k)+\nabla c_i(x_k)^Tp=0$ for $i=1,...,l$.

  2. Obtain a new point $x_{k+1}=x_k+sp_k$ via a line search.

  3. Obtain $B_{k+1}$ by a quasi-Newton update of $B_k$.

Here $T(x,\lambda)=||\nabla F(x)-\sum\lambda_i\nabla c_i(x)||+\kappa||c_i(x)||$, where $\kappa$ is a positive weighting parameter, but I'm not too concerned with computing this last part.

I just need an example of one iteration of how this algorithm works
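
For concreteness, here is a minimal numerical sketch of one iteration on the example above, assuming the usual convention that the QP step $(p_k,\lambda_{k+1})$ is obtained by solving the KKT system of the equality-constrained QP, and taking a unit step instead of a proper line search:

```python
import numpy as np

# One iteration of the Wilson-Han-Powell SQP method on
#   minimize F(x) = x2   subject to c(x) = x1^2 + x2^2 - 1 = 0,
# starting at x = (1/2, 1/2) with lambda = 1.  Sketch under the convention that
# the QP step solves  [B  A^T; A  0] [p; lambda_new] = [-grad_F; -c].

F_grad = lambda x: np.array([0.0, 1.0])          # gradient of F(x) = x2
c      = lambda x: x[0]**2 + x[1]**2 - 1.0       # equality constraint
c_grad = lambda x: np.array([2*x[0], 2*x[1]])    # gradient of c

x   = np.array([0.5, 0.5])
lam = 1.0
B   = np.array([[2*lam, 0.0], [0.0, 2*lam]])     # Hessian of F + lam*c

# Assemble and solve the KKT system of the QP subproblem
A   = c_grad(x).reshape(1, 2)
KKT = np.block([[B, A.T], [A, np.zeros((1, 1))]])
rhs = np.concatenate([-F_grad(x), [-c(x)]])
sol = np.linalg.solve(KKT, rhs)

p, lam_new = sol[:2], sol[2]
x_new = x + 1.0 * p                              # unit step (no line search here)
print("p =", p, "lambda_new =", lam_new, "x_new =", x_new)
```

With these numbers the linearised constraint is $p_1+p_2=0.5$, so the printed step can be checked by hand.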

Expected return of the market portfolio

Posted: 21 May 2022 04:48 PM PDT

I would appreciate help with how to compute the expected return of a market portfolio in the question (Exercise 3.14) below:

[image: Exercise 3.14]

In the question it says that we shall use the data given in Exercise 3.10, so I will present that question below as well so you will be able to see the three securities and their values:

[image: Exercise 3.10 data]

In the solution, they have provided the correct answers as $w=[0.438 , 0.012 , 0.550]$ and $\mu_m=0.183$, where $\mu_m$ is the expected return of the market portfolio (they also gave the standard deviation, but that is not necessary for my question here). I know how they compute the weights, so we can ignore that part; however, I do not manage to understand how they get the expected return of the market portfolio.

In my course literature it says that the expected return of a portfolio is given by the formula $$\mu_v=wm^T$$ where $w$ is the row matrix of the weights and $m$ is the row matrix of the expected returns of the assets/securities. So I figured that the expected return of the market portfolio in our case would be given by $$\mu_m=w_mm^T=[0.438 , 0.012 , 0.550]\cdot [0.08 , 0.1 , 0.06]^T$$ since in Exercise 3.10 it is said that $m=[0.08 , 0.1 , 0.06]$, but this does not give the correct solution and I can't figure out why.

Edit: When this did not work, I thought it might be because one needs to find new expected returns for each asset, matching the weights of the market portfolio, rather than use the given $m=[0.08 , 0.1 , 0.06]$ from Exercise 3.10 in the calculation. But I did not manage to find a way to derive new values of $m$, and I do not even know whether this would be a correct approach.
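
For what it's worth, here is a quick numerical check (a sketch) of the computation described above, confirming that $w\cdot m$ with the given data does not reproduce $\mu_m=0.183$:

```python
import numpy as np

w = np.array([0.438, 0.012, 0.550])   # market-portfolio weights from the book's solution
m = np.array([0.08, 0.10, 0.06])      # expected returns of the three securities (Exercise 3.10)
print(w @ m)                          # 0.06924, not the stated 0.183
```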

fourth-order finite difference for $(a(x)u'(x))'$

Posted: 21 May 2022 04:35 PM PDT

Previously I asked here about constructing a symmetric matrix for doing finite differences for $(a(x)u'(x))'$ where the (diffusion) coefficient $a(x)$ is spatially varying. The answer provided there works for getting a second-order accurate method. What about a fourth-order accurate method? If I follow the same idea and apply the following fourth-order accurate formula for the first derivative

$ u'_i = \dfrac{u_{i-1} - 8u_{i-1/2} + 8u_{i+1/2} - u_{i+1}}{6\Delta x}$ (1)

in succession, then I end up getting a formula for $(a u')_i'$ which involves $u_{i-2}, u_{i-3/2}, u_{i-1}, u_{i-1/2}, u_i, u_{i+1/2}, u_{i+1}, u_{i+3/2}, u_{i+2}$. It involves the midpoint values $u_{i\pm 1/2}, u_{i\pm 3/2}$ and depends on nine neighbouring values of $u$, which doesn't sound right for a fourth-order accurate scheme. For the special case $a(x)=1$ it also doesn't reduce to the formula

$ u'' = \dfrac{-u_{i-2} + 16u_{i-1} - 30u_i + 16u_{i+1} - u_{i+2}}{12\Delta x^2}$. (2)

So what's wrong with applying (1) in succession? What's the correct approach to get a finite difference formula for $(au')'$ with fourth order accuracy?
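
As a side check, here is a small sketch (not the $(au')'$ scheme asked for) that numerically confirms the standard five-point stencil (2) is fourth-order accurate for $u''$, which is the behaviour any correct scheme should reduce to when $a\equiv 1$:

```python
import numpy as np

# Convergence check for formula (2), the 5-point stencil for u''(x),
# on the test function u = sin(x) (so u'' = -sin(x)) at a fixed point x0.
def d2_error(h, x0=0.7):
    u = np.sin
    approx = (-u(x0 - 2*h) + 16*u(x0 - h) - 30*u(x0)
              + 16*u(x0 + h) - u(x0 + 2*h)) / (12 * h**2)
    return abs(approx + np.sin(x0))     # exact value is -sin(x0)

for h in [0.1, 0.05, 0.025]:
    print(h, d2_error(h))
# Halving h should shrink the error by roughly a factor of 16 (fourth order).
```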

A problem on finding the cap product structure on $\mathbb {RP}^{2}$ and the Klein bottle.

Posted: 21 May 2022 04:32 PM PDT

$\mathbf {The \ Problem \ is}:$ Compute all the cap products for $\mathbb{RP}^{2}$ with $\mathbb{Z}$ and $\mathbb{Z} / 2$ coefficients. Do the same for the Klein bottle $K$.

$\mathbf {My \ approach}:$ We have $H_{0}(\mathbb {RP}^{2};\mathbb Z)=\mathbb Z$ and $H_{1}(\mathbb {RP}^{2};\mathbb Z)=\mathbb Z_{2}$, and the remaining homology groups are $0.$

Then for $\mathbb {RP}^{2}$ , we only need to find :

$H_{0}(M) × H^{0}(M) \to H_{0}(M)$ and $H_{1}(M) × H^{1}(M) \to H_{0}(M)$

where $M=\mathbb {RP}^{2}$

The simplicial complex is: [image]

But I can't figure out how to start calculating these, and also how to handle $\mathbb Z_{2}$ coefficients.

I also couldn't do anything with $K$.

$H_{0}({K};\mathbb Z)=\mathbb Z$ and $H_{1}({K};\mathbb Z)= \mathbb Z \oplus \mathbb Z_{2}$, and the rest are $0.$

Hence we only have to find :

$H_{0}(M) × H^{0}(M) \to H_{0}(M)$ and $H_{1}(M) × H^{1}(M) \to H_{0}(M)$

where $M=K.$

The simplicial complex is: [image]

Any help is much appreciated; thanks in advance.

Construct subgroup of $\mathbb{Z_2} \times \mathbb{Z_2}$ where both groups involved in the direct product are not subgroups of $\mathbb{Z_2}$

Posted: 21 May 2022 04:32 PM PDT

I am tasked in an exercise to do the following:

Construct an example of a subgroup of $\mathbb{Z_2} \times \mathbb{Z_2}$ which is not of the form $K \times J$ for some $K < \mathbb{Z_2}$ and $J < \mathbb{Z_2}$


So, it's my understanding that this question is asking us to construct a subgroup of $\mathbb{Z_2} \times \mathbb{Z_2}$ of the form $K \times J$ where either $K$ or $J$ is not a subgroup of $\mathbb{Z_2}$, correct? I know it has to have the direct product construction of $K \times J$, otherwise the elements wouldn't have the same structure as pairs of elements from each group. Is this correct? I'm a little confused by the wording; it makes it sound like we are to construct a subgroup of $\mathbb{Z_2} \times \mathbb{Z_2}$ which isn't constructed via the direct product construction.

With that out of the way, the only thought that comes to mind on potentially meeting this requirement is to have $K$ or $J$ be a subgroup of a larger group of modular arithmetic that is isomorphic to $\mathbb{Z_2}$... but if it's isomorphic to the group in question then it's just labels that distinguish it from $\mathbb{Z_2}$ and is really a distinction without a difference.

The subgroups of $\mathbb{Z_2}$ are just the trivial subgroups: $\mathbb{Z_2}$ itself and $\{[0]\}$. So I don't really see how one can get a subgroup of $\mathbb{Z_2} \times \mathbb{Z_2}$ without having each group $K,J$ for $K \times J < \mathbb{Z_2} \times \mathbb{Z_2}$, be a subgroup of $\mathbb{Z_2}$.

Any thoughts on how best to proceed?
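
Since the group is tiny, one way to test the intuition above is to brute-force every subgroup of $\mathbb{Z_2} \times \mathbb{Z_2}$ and see whether each one splits as $K \times J$; a rough sketch of such an enumeration:

```python
from itertools import product, combinations

# Brute-force sketch: list every subset of Z2 x Z2 that contains the identity and
# is closed under addition.  (In Z2 x Z2 every element is its own inverse, so
# closure plus the identity already makes a subset a subgroup.)
G = list(product([0, 1], repeat=2))           # elements of Z2 x Z2

def add(a, b):
    return ((a[0] + b[0]) % 2, (a[1] + b[1]) % 2)

subgroups = []
for r in range(1, 5):
    for subset in combinations(G, r):
        S = set(subset)
        if (0, 0) in S and all(add(a, b) in S for a in S for b in S):
            subgroups.append(sorted(S))

for H in subgroups:
    print(H)
```

Comparing the printed subgroups with the possible products $K \times J$ (where $K$ and $J$ range over the subgroups of $\mathbb{Z_2}$) is then a finite check.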

Prove that the direct product of $2$ subgroups is a subgroup

Posted: 21 May 2022 04:32 PM PDT

I have an exercise where I am tasked to prove that for $2$ subgroups $K < G$ and $J < H$ of $2$ groups $G,H$ the following is a subgroup: $$K \times J \subset G \times H$$

I believe I have done so, though I would appreciate some verification that my reasoning is correct.


Proof

The set $K \times J$ is defined as $\{ (k,j) \mid k \in K, j \in J\}$ where $K < G$ and $J< H$. We must show closure of products and inverses in $K \times J$ to show $K \times J < G \times H$.

  1. Let $(k,j)$, $(k',j') \in K \times J$. Then we have $$(k,j)(k',j') = (kk', jj') \in K \times J$$ since $kk' \in K$ and $jj' \in J$.
  2. Let $k^{-1} \in K$ and $j^{-1} \in J$. Then we have: $$(k,j)(k^{-1},j^{-1}) = (kk^{-1}, jj^{-1}) = (e,e)$$ $$(k^{-1},j^{-1})(k,j) = (k^{-1}k, j^{-1}j) = (e,e)$$ since $kk^{-1} = k^{-1}k = e \in K$ and $jj^{-1} = j^{-1}j = e \in J$. Thus $(k^{-1},j^{-1}) \in K \times J$. Hence $(K \times J) < G \times H$. $\square$

Does this look alright?

Mixing metric distance with Euclidean metric on $\mathbb{R}$

Posted: 21 May 2022 04:24 PM PDT

By far the most elegant proof of uniqueness of limits in a metric space I've seen (though I can't find the stack exchange link where it originated) is as follows.

Let $X$ be a metric space, $(p_n)$ a sequence in $X$, $p,p'$ limits. For any $n$, we have $$ d(p,p') \leq d(p,p_n) + d(p_n, p'). $$ As $d(p,p_n) \to 0$ and $d(p_n, p') \to 0$, $d(p, p_n) + d(p_n, p') \to 0$, so by the squeeze lemma, $d(p,p') \to 0$. So $d(p,p') = 0$.

I believe the steps in this proof make sense to me. It uses the algebraic limit theorem to argue that the limit of a sum equals the sum of the limits on the right-hand side, the fact that distance is nonnegative, the squeeze lemma, and then the fact that $d(p,p')$ is constant over $n$, so if $d(p,p') \to 0$, $d(p,p') = 0$.

My question is: this proof seems to use the Euclidean metric on $\mathbb{R}$, regarding $d(p,p_n)$ and $d(p_n, p')$ as sequences in $\mathbb{R}$ in their own right. Why are we allowed to use the Euclidean metric? I believe that all metrics on $\mathbb{R}$ end up being equivalent, so this may not be a problem in $\mathbb{R}$, but what if I regarded them as sequences in $\mathbb{C}$, where I believe that isn't the case? In other words, is there any role played by the metric that is used, and does it in any way sacrifice generality to "choose" a metric on $\mathbb{R}$/$\mathbb{C}$?

Inequality relating entropy to mutual information

Posted: 21 May 2022 04:39 PM PDT

Let $\{X_n\}$ be a sequence of independent, discrete random variables, and let $Z$ be another discrete random variable. Show that $$H(Z)\geq\sum_{i=1}^\infty I(X_i;Z)$$ where $H$ is the entropy and $I$ is the mutual information.

I really don't know where to start with this. Of course there is the basic fact (practically by definition) that $H(Z)\geq I(X_i;Z)$ for each $i$ but I don't see how to improve this, especially since we're given practically no information about the relationship between $\{X_n\}$ and $Z$.

UPDATE ON ATTEMPT: Fix $n\geq1$. Then by the chain rule for mutual information $$I(X_1^n;Z)=\sum_{k=1}^nI(X_k;Z|X_1^{k-1}).$$ Then we have by definition $$I(X_k;Z|X_1^{k-1})=H(X_k|X_1^{k-1})-H(X_k|Z,X_1^{k-1}).$$ Because of the independence of the $X_i$, $H(X_k|X_1^{k-1})=H(X_k)$. If we have $H(X_k|Z,X_1^{k-1})=H(X_k|Z)$ then we have shown that for any $n\geq1$ we have $$I(X_1^n;Z)=\sum_{k=1}^nI(X_k;Z).$$ Using the fact that $H(Z)\geq I(X_1^n;Z)$ then gives the result (I think) when taking $n\to\infty$ (although this feels a bit iffy to me).

However the equality $H(X_k|Z,X_1^{k-1})=H(X_k|Z)$ is true if and only if $X_k$ and $X_1^{k-1}$ are conditionally independent given $Z$. I don't think that this is necessarily true, so is there a problem with my approach?
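
Not a proof, but a numerical sanity check of the inequality on a tiny example can help build confidence; here is a sketch with two independent fair bits $X_1, X_2$ and $Z=(X_1,X_2)$, where the bound should hold with equality:

```python
import numpy as np
from itertools import product

# Check H(Z) >= I(X1;Z) + I(X2;Z) for X1, X2 independent fair bits and
# Z = (X1, X2) encoded as 2*X1 + X2, so H(Z) = 2 and each I(Xi;Z) = 1.

def entropy(p):
    p = np.asarray([q for q in p if q > 0])
    return -(p * np.log2(p)).sum()

joint = {(x1, x2): 0.25 for x1, x2 in product([0, 1], repeat=2)}
z_of  = lambda x1, x2: 2 * x1 + x2

pz = {}
for (x1, x2), p in joint.items():                 # marginal of Z
    pz[z_of(x1, x2)] = pz.get(z_of(x1, x2), 0) + p

def mutual_information(i):
    """I(X_i; Z) computed from the joint distribution of (X_i, Z)."""
    pxz, px = {}, {}
    for (x1, x2), p in joint.items():
        xi, z = (x1, x2)[i], z_of(x1, x2)
        pxz[(xi, z)] = pxz.get((xi, z), 0) + p
        px[xi] = px.get(xi, 0) + p
    return sum(p * np.log2(p / (px[x] * pz[z])) for (x, z), p in pxz.items() if p > 0)

print(entropy(list(pz.values())), mutual_information(0) + mutual_information(1))  # 2.0, 2.0
```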

Is the following statement about sequences true?

Posted: 21 May 2022 04:50 PM PDT

If the sequence $\{a_n + b_n\}$ converges and $\{a_n\}$ also converges then so does $\{b_n\}$

what is the relationship between density and connected graphs?

Posted: 21 May 2022 04:47 PM PDT

What is the minimum density for an undirected network of size N to be connected? What is the maximum?

Here is my understanding: the number of potential edges in an undirected graph is N(N-1)/2, where N is the number of nodes. G is connected if every pair of vertices in G is joined by a path.

I am not sure about the math related to connected graphs.
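
A quick experiment (a sketch using the networkx library, assuming it is available) that compares the density of the sparsest connected graph on N nodes, a spanning tree, with the densest, the complete graph:

```python
import networkx as nx

N = 10
tree     = nx.path_graph(N)         # a tree: connected with the fewest possible edges, N - 1
complete = nx.complete_graph(N)     # all N(N-1)/2 edges present

print(nx.is_connected(tree), nx.density(tree))          # True, 0.2  (= 2/N here)
print(nx.is_connected(complete), nx.density(complete))  # True, 1.0
```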

Probability of winning a radio broadcasting contest

Posted: 21 May 2022 04:16 PM PDT

  • My teacher said to write 2-3 paragraphs on what chance we have of winning the contest. I didn't really understand it; can someone give me an example?

n divides $m+1, m^m+1, m^{m^m}+1,...$

Posted: 21 May 2022 04:22 PM PDT

Prove that for each positive integer n, there is a positive integer m such that each term of the infinite sequence $m+1, m^m+1, m^{m^m}+1,...$ is divisible by n.

The only thing I could work out was that the sequence is all even if $m$ is odd, so $n=2$ works. I would be pleased if you can help!
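
One way to get a feel for the problem is a brute-force search for candidate $m$; the sketch below only checks the first three terms of the sequence (so it suggests candidates rather than proving anything), using Python's fast modular exponentiation:

```python
# For each n, look for an m such that the first three terms
# m+1, m^m+1, m^(m^m)+1 are all divisible by n.
def first_three_divisible(m, n):
    t1 = (m + 1) % n
    t2 = (pow(m, m, n) + 1) % n
    t3 = (pow(m, m**m, n) + 1) % n    # the exact exponent m^m still fits in a Python int
    return t1 == 0 and t2 == 0 and t3 == 0

for n in range(2, 11):
    m = next(m for m in range(1, 200) if first_three_divisible(m, n))
    print(n, m)
```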

Equivalent Definitions of Countable Sets

Posted: 21 May 2022 04:14 PM PDT

$\newcommand{\naturalset}{\mathbb{N}}$ $\newcommand{\range}[1]{\operatorname{range}\left(#1\right)}$ I knew of two definitions of countable sets:

(1) A set $S$ is countable if and only if $S \approx \mathbb{N}$ (countably infinite) or $S \approx n$ for some $n \in \mathbb{N}$ (finite).

(2) A set $S$ is countable if and only if there exists an injective function $f: S \to \mathbb{N}$.

I tried to prove if these two definitions are equivalent, and I am checking if my proof is correct.

First of all, we need the following theorem:

Suppose $A \subseteq \mathbb{N}$. Then either $A$ is finite (in the first sense) or $A \approx \mathbb{N}$.

Proof:

First assume that $S$ is countable. Then either there exists some $n \in \naturalset$ such that $S \approx n$ or $S \approx \naturalset$. In either case, there exists an injective function $f: S \to \mathbb{N}$. Thus we conclude that countability implies the existence of an injective function.

Next assume that there exists a function $f: S \to \mathbb{N}$ which is injective. Then $f$ can be either surjective or not.

First assume that $f$ is surjective. Then $S \approx \naturalset$ and $S$ is countably infinite.

Next assume that $f$ is not surjective. Then $\range{f} \subseteq \naturalset$ and $\range{f} \neq \naturalset$. Then with the theorem stated above, either $\range{f}$ is finite or $\range{f} \approx \naturalset$.

First assume that $\range{f}$ is finite. In this case, there exists some $n_{0}$ such that $\range{f} \approx n_{0}$. Using the fact that $S \approx \range{f}$, we have $S \approx n_{0}$. Thus, $S$ is finite.

Now assume that $\range{f} \approx \naturalset$, again using the fact that $S \approx \range{f}$, we have $S \approx \naturalset$, and $S$ is countably infinite.

Convex games and a convex function $f:\mathbb{N}\to\mathbb{R}$

Posted: 21 May 2022 04:16 PM PDT

I'm stuck on this exercise in game theory...

A function $f:\mathbb{N}\to\mathbb{R}$ is called convex if $\forall i,j,k\in\mathbb{N}$ such that $i\le j\le k$ and $j<k$,

$f$ satisfies $\left(k-i\right)f\left(j\right)\le\left(k-j\right)f\left(i\right)+\left(j-i\right)f\left(k\right).$

A coalitional game is called convex if $\forall S,T\subseteq N$, we have $\nu\left(S\right)+\nu\left(T\right)\le\nu\left(S\cup T\right)+\nu\left(S\cap T\right)$.

Let $N$ be a set of players such that for all $S\subseteq N$ we have $\nu\left(S\right)=f\left(\left|S\right|\right).$

Prove that $\left(N,\nu\right)$ is a convex game iff $f$ is convex.

Thanks for helping
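
Not a proof, but a finite check can make the statement plausible; here is a sketch that verifies the supermodularity inequality for one specific convex $f$ (namely $f(k)=k^2$, chosen only for illustration) on a four-player set:

```python
from itertools import combinations

# Check nu(S) + nu(T) <= nu(S|T) + nu(S&T) for all S, T when nu(S) = f(|S|)
# with the convex function f(k) = k^2.
N = frozenset(range(4))
f = lambda k: k**2
nu = lambda S: f(len(S))

subsets = [frozenset(c) for r in range(len(N) + 1) for c in combinations(N, r)]
ok = all(nu(S) + nu(T) <= nu(S | T) + nu(S & T) for S in subsets for T in subsets)
print(ok)   # True for this choice of f
```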

Evaluating $\sum_{k=0}^{n}\binom{n}{k}\binom{k}{r}\binom{k}{s}(-1)^{k}$

Posted: 21 May 2022 04:13 PM PDT

So far I've been able to determine that if $n, r, s$ are nonnegative integers, then $$ \sum_{k=0}^{n}\binom{n}{k}\binom{k}{r}\binom{k}{s}(-1)^{k} = \begin{cases} 0 &\qquad\text{ if } r+s < n, \\ \displaystyle (-1)^{n}\binom{n}{r} &\qquad\text{ if } r+s=n, \\ ?? &\qquad\text{ if } r+s>n. \end{cases} $$ I am wondering if there is a way to give a "closed form answer" for the sum in the case where $r+s>n$. WolframAlpha doesn't seem to give me anything sensible. Any ideas would be appreciated.

Perhaps I should say that this sum closely resembles a somewhat well-known identity given by $$ \sum_{k=0}^{n} \binom{n}{k}\binom{k}{r}x^{k} = x^{r}(1+x)^{n-r}\binom{n}{r}. $$ Plugging in $x=1$ and $x=-1$ reduces this to some interesting-looking identities involving binomial coefficients. However, the sum I'm trying to evaluate seems to be much harder to figure out.
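
For experimenting with the remaining case, a short script (a sketch) that evaluates the sum directly is handy; it also reproduces the two cases stated above:

```python
from math import comb

# Direct evaluation of the alternating sum for given n, r, s.
def S(n, r, s):
    return sum((-1)**k * comb(n, k) * comb(k, r) * comb(k, s) for k in range(n + 1))

print(S(6, 2, 3), 0)                         # r+s < n  -> 0
print(S(6, 2, 4), (-1)**6 * comb(6, 2))      # r+s = n  -> (-1)^n C(n,r)
print(S(6, 4, 4))                            # r+s > n  -> the unknown case
```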

Proof: how many continuous functions on $[0;1]$ verify $f(x)=f(x/2)\frac{1}{\sqrt{2}}$?

Posted: 21 May 2022 04:24 PM PDT

Question:
I didn't find any solution to this question on the internet, and I wanted to know if my proof is correct:
How many continuous functions on $[0;1]$ verify $f(x)=f(x/2)\frac{1}{\sqrt{2}}$?

Edit: I know that the weakest part of my proof is part (II/).

Answer:
Only one function, $f(x)=0$. Let us prove it.

I/ First, if $f(x)=0$ for all $x\in [0;1]$, then of course $f(x/2)=0$, and so we have $0=f(x)=f(x/2)\cdot\frac{1}{\sqrt{2}}=0\cdot\frac{1}{\sqrt{2}}=0$.

II/ Secondly, let us prove that if there exists $x_0 \in [0;1]$ such that $f(x_0)=0$, then $f(x)=0$ for all $x \in [0;1]$.
If such an $x_0$ exists, assume without loss of generality that $x_0 \notin \mathbb{Q}$; then $f$ vanishes at least at all points of the form $x_0/2^n$, and all these points are irrational (a rational times an irrational is irrational). Moreover, there always exists an irrational number (in particular one of the form $x_0/2^n$) between two rational numbers, and a rational number between two irrational numbers. So $f(x)$ must be equal to zero on all the rational and irrational numbers in order to be continuous.

III/ Now let us prove that there does not exist any other continuous function that verifies $f(x)=f(x/2)\cdot\frac{1}{\sqrt{2}}$.
By contradiction, suppose that there exists such a continuous function on $[0;1]$. From what we wrote just before, such a function satisfies $f(x) \neq 0$ for all $x \in [0;1]$.

  1. Let us build the sequence $x_0=1$ and $x_n=x_{n-1}/2=1/2^n$, $n \geq 1$. It is obvious that $x_n\underset{n\rightarrow \infty}{\rightarrow} 0$.
  2. $y_n=f(x_n)=\frac{1}{\sqrt{2}}f(x_{n-1})=\frac{1}{\sqrt{2}}y_{n-1}=\frac{1}{(\sqrt{2})^n}y_{1}=f(x_1)\cdot\frac{1}{(\sqrt{2})^n} \underset{n\rightarrow \infty}{\rightarrow} 0 $
  3. By continuity, $f(0)=\lim_{n\rightarrow \infty} f(x_n)=0$. So according to (II/), because there exists a point $x_0=0$ where $f(x_0)=0$, we get $f(x)=0$ for all $x \in [0;1]$, in contradiction with the assumption.

Is it correct?

Q.E.D

"Sentences" and "Formulas" in the Stanford Encyclopedia of Philosophy

Posted: 21 May 2022 04:33 PM PDT

I have quite a few issues with an article about classical logic in the Stanford Encyclopedia of Philosophy (SEP), and I am not sure if it is me or the article:

We now introduce a deductive system, D, for our languages. As above, we define an argument to be a non-empty collection of sentences in the formal language, one of which is designated to be the conclusion. If there are any other sentences in the argument, they are its premises.[1] By convention, we use "Γ", "Γ′", "Γ1", etc, to range over sets of sentences, and we use the letters "ϕ", "ψ", "θ", uppercase or lowercase, with or without subscripts, to range over single sentences. We write "Γ,Γ′" for the union of Γ and Γ′, and "Γ,ϕ" for the union of Γ with {ϕ}.

We write an argument in the form ⟨Γ,ϕ⟩ , where Γ is a set of sentences, the premises, and ϕ is a single sentence, the conclusion. Remember that Γ may be empty. We write Γ⊢ϕ to indicate that ϕ is deducible from Γ, or, in other words, that the argument ⟨Γ,ϕ⟩ is deducible in D. We may write Γ⊢Dϕ to emphasize the deductive system D. We write ⊢ϕ or ⊢Dϕ to indicate that ϕ can be deduced (in D) from the empty set of premises.

"we define an argument to be a non-empty collection of sentences in the formal language, one of which is designated to be the conclusion. If there are any other sentences in the argument, they are its premises."

As far as I know, we are not allowed to write any sentence not using the formal language, so what are those premises? Do they mean "other open formulas" instead of "sentences"? But footnote [1] leads to

"It is possible to develop a system which allows open formulas to appear in arguments. This would have a direct impact on our treatment of the quantifiers, as we would have to be much more careful about which variables we were using. Allowing formulas to appear in arguments simplifies some things and complicates others."

Note that now they say "open formulas" at one point and "formulas" later. Other documents I read say a "term" is a thing that is not truth-apt, while a formula is truth-apt (so basically a closed formula).

In the second paragraph, they write "argument in the form ⟨Γ,ϕ⟩, where Γ is a set of sentences, the premises, and ϕ is a single sentence, the conclusion", but I thought from above that only those "other sentences" are its premises... What is going on here?

The issue I have here is that the SEP does not seem to keep "sentence", "formula", and "open formula" straight in its own text - or is it me who is not getting it?

After all the SEP is peer-reviewed and everything.

How to find the distance between circles inside a square with periodic boundaries

Posted: 21 May 2022 04:16 PM PDT

[image: two circles inside a square]

So I have this square periodic boundary, and I want the distance between the two circles both directly and across the boundary, treating this square like the surface of a sphere: basically, if a circle crosses one side of the square it should come out from the other side. But I just want the distances. After some searching I found that my problem is related to periodic square boundary conditions, so if anyone knows about this, please explain.
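
If the goal is just the distances, the standard trick for a periodic square box is the minimum-image convention: wrap each component of the separation vector into $[-L/2, L/2]$ before taking the norm. A rough sketch (with a hypothetical box size $L$ and circle centres):

```python
import numpy as np

# Minimum-image distance in a square periodic box of side L: the shorter of the
# direct separation and the separation "across the boundary" is used per axis.
def periodic_distance(p, q, L):
    d = np.asarray(p, dtype=float) - np.asarray(q, dtype=float)
    d -= L * np.round(d / L)          # wrap each component to the nearest image
    return np.linalg.norm(d)

L = 10.0
print(periodic_distance((1.0, 5.0), (9.0, 5.0), L))   # 2.0, not 8.0: the path across the boundary
```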

Find (a generating set for) $\mathbb{Q}[x]\cap I$ where $I=\langle x^2-y,y^2-x,x^5-x^2\rangle$ (using a Gröbner basis).

Posted: 21 May 2022 04:09 PM PDT

Consider the polynomial ring $\mathbb{Q}[x,y]$ and the ideal $I=\langle x^2-y,y^2-x,x^5-x^2\rangle$ in $\mathbb{Q}[x,y]$. $G=(x^2-y,y^2-x)$ is a (reduced) Gröbner basis for $I$ with respect to the graded lexicographic ordering.

  1. Find $\mathbb{Q}[x]\cap I$ (That is find a set of generators for $\mathbb{Q}[x]\cap I$ as an ideal in $\mathbb{Q}[x]$).

To solve this problem I believe I want to find a Gröbner basis for $\mathbb{Q}[x]\cap I$. But how do I do that?

Usually for these types of problems I have some ideal $J=\langle f_1,f_2,...,f_n\rangle$. I then consider $H=(f_1,f_2,...,f_n)$ and check if $H$ is a Gröbner basis for $J$. If it's not, then I can create one using Buchberger's algorithm.

However, in this problem it's kind of the opposite. I don't know the generators for $\mathbb{Q}[x]\cap I$. Is $\mathbb{Q}[x]\cap I$ the set of all one-variable polynomials in $x$ that lie in $I$? What does it really mean, and how do I approach this problem?
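
The usual route is elimination: compute a Gröbner basis for $I$ with respect to a lex order in which $y$ is larger than $x$; by the elimination theorem, the basis elements that involve only $x$ generate $\mathbb{Q}[x]\cap I$. A sketch using SymPy (assuming it is available):

```python
from sympy import symbols, groebner

# Lex order with y > x eliminates y; the basis elements in x alone
# generate the elimination ideal Q[x] ∩ I.
x, y = symbols('x y')
I = [x**2 - y, y**2 - x, x**5 - x**2]
G = groebner(I, y, x, order='lex')
print(G)
print([g for g in G.exprs if not g.has(y)])   # generators of Q[x] ∩ I
```

And yes, $\mathbb{Q}[x]\cap I$ is exactly the set of polynomials in $x$ alone that lie in $I$; the printed generators describe that ideal inside $\mathbb{Q}[x]$.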

Question: For one survey (Combinatorics)

Posted: 21 May 2022 04:27 PM PDT

For one survey, 39 students from the first, second and third years must be chosen, so that at least 8 students are from the first year and at least 3 students are from the second year. How many possible ways are there to do this?
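
Under one reading of the problem (an assumption, since the students may instead be distinguishable), "ways" counts the possible year-group size splits $(a,b,c)$ with $a+b+c=39$, $a\ge 8$, $b\ge 3$; a brute-force sketch that also compares with the stars-and-bars count:

```python
from math import comb

# Count triples (a, b, c) of class sizes with a + b + c = 39, a >= 8, b >= 3, c >= 0.
count = sum(1 for a in range(8, 40) for b in range(3, 40) if 39 - a - b >= 0)
print(count, comb(39 - 8 - 3 + 2, 2))   # both 435: stars and bars after subtracting the lower bounds
```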

How do we prove the limit of a sequence of real numbers is unique?

Posted: 21 May 2022 04:49 PM PDT

I wasn't sure if this proof was correct or not.

Proposition. If $(a_n) \to a$ and $(a_n) \to b$, then $a = b$.

Proof. Suppose $(a_n) \to a$ and $(a_n) \to b$.

Then, $\lim \limits_{n \to \infty}(a_n)=a$ and $\lim \limits_{n \to \infty}(a_n)=b$.

So for every $\epsilon > 0$ there exists an $N \in \mathbb{N}$ such that $n>N$ implies $|a_n -a| < \frac{\epsilon}{2}$ and an $M \in \mathbb{N}$ such that $n>M$ implies $|a_n-b| < \frac{\epsilon}{2}$.

So $|(a_n-a)+(a_n-b)| < \frac{\epsilon}{2} + \frac{\epsilon}{2} = \epsilon$.

So $|2a_n - (a + b)| < \epsilon$.

So $\lim \limits_{n \to \infty} 2a_n = a + b$.

So $2\lim \limits_{n \to \infty} a_n = a + b$.

So $\lim \limits_{n \to \infty} a_n = \frac{a + b}{2}$.

Case 1: $\lim \limits_{n \to \infty} a_n = a$

If $\lim \limits_{n \to \infty} a_n = a$, then $a = \frac{a+b}{2}$.

So $2a = a + b$ and $a = b$.

Case 2: $\lim \limits_{n \to \infty} a_n = b$

If $\lim \limits_{n \to \infty} a_n = b$, then $b = \frac{a+b}{2}$.

So $2b = a + b$ and $a = b$.

Therefore $a = b$.

Find a limit of a series

Posted: 21 May 2022 04:49 PM PDT

Given $x_1=2$ and $x_{n+1}=\frac{x_n+1+\sqrt{x_n^2+2x_n+5}}{2}, n\geq 2$

Prove that $y_n=\sum_{k=1}^{n}\frac{1}{x_k^2-1}, n\geq 1$ converges and find its limit.


  1. To prove convergence we can just estimate $x_n > n$; therefore $y_n<z_n$, where $z_n=\sum_{k=1}^{n}\frac{1}{k^2-1}$. Since $z_n$ converges, $y_n$ converges too.

  2. We can notice that $x_n^2+2x_n+5=(x_n+1)^2+4$. So $x_{n+1}$ is one of the roots of the equation $x_{n+1}^2-(x_n+1)x_{n+1}-1=0$.
    So $x_{n+1}^2-1=(x_n+1)x_{n+1}$ and therefore $y_n=\sum_{k=1}^n \frac{1}{(x_{k-1}+1)x_{k}}$.

I'm stuck here.
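
Before proving anything, it can help to compute the partial sums numerically to see what value they approach; a minimal sketch:

```python
from math import sqrt

# Iterate the recursion and accumulate the partial sums y_n to guess the limit.
x = 2.0
y = 1.0 / (x**2 - 1)          # k = 1 term
for _ in range(50):
    x = (x + 1 + sqrt(x**2 + 2*x + 5)) / 2
    y += 1.0 / (x**2 - 1)
print(y)
```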

A representation is reducible if it is equivalent to a reducible one

Posted: 21 May 2022 04:33 PM PDT

Let $\rho$ and $\pi$ be equivalent representations of the group G over F. Prove that if $\rho$ is reducible then $\pi$ is reducible.

Liebeck and James prove this property using the matrix representation of a linear transformation, since reducible representations are equivalent to ones in block matrix form. I tried another way:

Let $\{v_{1},..,v_{k},v_{k+1},..,v_{n}\}$ be a basis of a vector space $V=F^{n}$. Since $\rho$ is reducible, there is an invariant subspace of $V$ under $\rho g$ for all $g\in G$, say $W=span\{v_{1},...,v_{k}\}$. Then there exists $(\alpha_{ij})$ for each $g\in G$ such that $(\rho g)v_{j}=\sum^{k}_{i=1}\alpha_{ij}v_{i}$ for $j\in\{1,...,k\}$, and there is a linear transformation $T\in GL(V)$ such that $T(\rho g)T^{-1}=(\pi g)$.

Define $T(v_{j})=u_{j}$. Since $T$ is an isomorphism, the set $\{u_{1},...,u_{n}\}$ is linearly independent. Hence, $(\rho g)T^{-1}(u_{j})=(\rho g)v_{j}=\sum^{k}_{i=1}\alpha_{ij}v_{i}$, then

$T(\rho g)T^{-1}(u_{j})=(\pi g)v_{j}=\sum^{k}_{i=1}\alpha_{ij}T(v_{i})=\sum^{k}_{i=1}\alpha_{ij}u_{i}$. We conclude that $U=span\{u_{1},...,u_{k}\}$ is a FG-submodule of $\pi$.

Does this suffice to prove that $\pi$ is reducible?

How to show that the winding number must be $0$ or $\pm$ $1$ in this case?

Posted: 21 May 2022 04:55 PM PDT

Prerequisite: $K: [0,1] \to \mathbb{C}$ is a piecewise continuously differentiable closed path that meets the non-positive real axis at only one point, $K^{-1}(\{x \mid x \le 0\})=\{t_0\}$ with $0 < t_0 < 1$ and $K(t_0) = -r < 0$, and the path does not pass through the origin.

Show: $n(0) \in \{0, \pm 1\}$ (where $n$ is the winding number), and that $0$, $1$, and $-1$ occur according to whether $K$ runs locally at $t_0$ on one side of the negative real axis, crosses the negative real axis from top to bottom, or crosses it from bottom to top.

I tried to prove this by contradiction: if we assume that $n=2$, then the closed path would have to intersect the negative real axis twice, at two points $w=(a,0)$ and $z=(b,0)$. Would that be enough to prove the statement?
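
As a numerical aside (not a proof), the winding number can be estimated directly from its integral formula $n(0)=\frac{1}{2\pi i}\int_0^1 \frac{K'(t)}{K(t)}\,dt$; a sketch for a sample path that crosses the negative real axis exactly once:

```python
import numpy as np

# Estimate n(0) = (1/2πi) ∮ dz/z for the unit circle, which meets the negative
# real axis only at t0 = 1/2 and should give n(0) = 1.
t = np.linspace(0.0, 1.0, 20001)
K = np.exp(2j * np.pi * t)

dK   = np.diff(K)                    # increments of the path
Kmid = 0.5 * (K[:-1] + K[1:])        # midpoint values of K on each increment
n0   = np.sum(dK / Kmid) / (2j * np.pi)
print(round(n0.real))                # 1
```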

Antiderivative of $\sqrt[4]{r^4-x^4}$ for the area of a squircle [closed]

Posted: 21 May 2022 04:31 PM PDT

I was trying to find an equation for the area of a squircle ($x^4+y^4=r^4$) using the integral $\int_{-r}^r \sqrt[4]{r^4-x^4} \,dx$ for the top half.

What is the antiderivative of $\sqrt[4]{r^4-x^4}$, where $r$ is a constant?
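
The antiderivative is not elementary; for $r=1$ it can be written with a hypergeometric function, $\int \sqrt[4]{1-x^4}\,dx = x\,{}_2F_1\!\left(-\tfrac14,\tfrac14;\tfrac54;x^4\right)+C$. For the area itself, numerical integration may be more practical; a sketch using SciPy (assuming it is available):

```python
from scipy.integrate import quad

# Area of the squircle x^4 + y^4 <= r^4: twice the integral of the top half.
r = 1.0
top_half, _ = quad(lambda x: max(r**4 - x**4, 0.0) ** 0.25, -r, r)
print(2 * top_half)    # ≈ 3.708 for r = 1
```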

What are some examples of "Jordan spaces" which are *not* homeomorphic to a subspace of $\Bbb R^n$ (with the Euclidean topology)?

Posted: 21 May 2022 04:17 PM PDT

Note: This question has been substantially revised, see the edit history for earlier versions.


So far, the only examples of metric spaces which I have seen in topology books are Euclidean $n$-space, subspaces of Euclidean $n$-space, and the discrete space. Obviously, it is not the case that "every metric space of cardinality $\mathfrak c$ is either homeomorphic to a subspace of Euclidean $n$-space or discrete," so I decided to come up with some counterexamples.

I started by sketching what a potential basis set might look like and trying to find a metric to fit it. I made the obvious observation that any metric topologically equivalent to the Euclidean metric on $\Bbb R^2$ generates a basis of open regions bounded by (what I now realize are) Jordan curves. So I tried to think of a metric for which this isn't the case. I noticed that any example I could think of had the property that an open set, the boundary of an open set, or the exterior of an open set would be countable. This leads me to the idea of what I am calling "Jordan spaces" (previously "non-degenerate metric topology"), which I am defining as follows:

Let $X$ be a set of cardinality $\mathfrak c$, and $d$ a metric on $X$. The metric space $(X,d)$ is a Jordan space if and only if for every point $x\in X$ and real number $r>0$, the sets $\{y\in X:d(x,y)<r\}$, $\{y\in X:d(x,y)=r\}$, and $\{y\in X:d(x,y)>r\}$

  1. are individually equinumerous with $X$

  2. together form a partition of $X$

(The second criterion is trivial, but important enough to applications that it's worth stating anyway.)

Note that if $d$ is any metric on $\Bbb R^2$ such that the set $\{y\in \Bbb R^2:d(x,y)=r\}$ forms a Jordan curve for any $x\in\Bbb R^2$ and real number $r>0$, then $(\Bbb R^2,d)$ is a Jordan space. This generalizes to $\Bbb R^n$ in the obvious way - in particular, any metric space which is homeomorphic to Euclidean $n$-space is a Jordan space.

Now the question is this: Is every Jordan space homeomorphic to a subspace of Euclidean $n$-space?

My immediate answer is "almost certainly not," and I have some ideas for a proof, but what I would really like is a concrete example.


Additional note: topology is not my native language, please forgive me any mistakes I make and feel free to correct my vocabulary.

Do the global minima of a continuous function vary continuously?

Posted: 21 May 2022 04:32 PM PDT

Let $\|\cdot\|$ be any norm on $\mathbb{R}^n$, and $f : \mathbb{R}^n\rightarrow\mathbb{R}$ a continuous function.

Let further $B_r:=\{x\in\mathbb{R}^n\mid \|x\|\leq r\}$ be the zero-centered closed $\|\cdot\|$-ball of radius $r\geq 0$.

Is the function $\phi:[0,\infty)\rightarrow\mathbb{R}$ given by $$ \phi(r):= \min_{y\in B_r} f(y) \quad \text{ continuous ?} $$

This seems true, but maybe you know better; any proofs, references -- or, indeed, counterexamples-- are welcome.
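
Not an answer, but a quick numerical experiment (a sketch, with an arbitrary test function) for building intuition: approximate $\phi(r)$ on a grid and look at how it changes for nearby radii:

```python
import numpy as np

# Grid approximation of phi(r) = min over the closed ball B_r (here in R^1).
f = lambda x: (x - 1.0) ** 2 * np.cos(3 * x)     # arbitrary continuous test function
def phi(r, n=20001):
    xs = np.linspace(-r, r, n) if r > 0 else np.array([0.0])
    return f(xs).min()

for r in [0.999, 1.0, 1.001]:
    print(r, phi(r))      # nearby radii should give nearby minima
```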

Proof of an identity about Cartan gauge

Posted: 21 May 2022 04:55 PM PDT

Let $G$ be a Lie group with Lie algebra $\frak{g}$ at the identity, let $\theta$ be a $\frak g$-valued 1-form on the manifold $M$, and let $g:M\to G$ be a smooth function. Show that $$d(Ad_{g^{-1}}\theta)=Ad_{g^{-1}}d\theta+[g^*\omega,Ad_{g^{-1}}\theta]$$ where $Ad_g:\frak g\to\frak g$ is a Lie algebra isomorphism and $\omega$ is the Maurer-Cartan form on $G$.

My proof is as follows. Let $X,Y$ be two tangent vectors on $M$; then

$$\text{LHS}=d(Ad_{g^{-1}}\theta)(X,Y)=X(Ad_{g^{-1}}\theta(Y))-Y(Ad_{g^{-1}}\theta(X))-Ad_{g^{-1}}\theta([X,Y])$$ $$\text{RHS}=Ad_{g^{-1}}d\theta(X,Y)+[g^*\omega,Ad_{g^{-1}}\theta](X,Y)$$ And $$Ad_{g^{-1}}d\theta(X,Y)=Ad_{g^{-1}}(X(\theta(Y))-Y(\theta(X))-\theta([X,Y]))$$ $$[g^*\omega,Ad_{g^{-1}}\theta](X,Y)=[g^*\omega(X),Ad_{g^{-1}}\theta(Y)]-[g^*\omega(Y),Ad_{g^{-1}}\theta(X)]$$

I don't know how to expand the last identity, whether in an extrinsic or an intrinsic way. I have been stuck here for a whole day. Any helpful suggestions?

Show $X_1$ and $X_2$ are negatively correlated

Posted: 21 May 2022 04:11 PM PDT

Consider $n$ independent tosses of a die. Each toss has probability $p_i$ of resulting in $i$. Let $X_i$ be the number of tosses that result in $i$. Show that $X_1$ and $X_2$ are negatively correlated.

My question is how $p_i$ plays into this proof. When I proved that $X_1$ and $X_2$ are negatively correlated, I didn't see an importance in making $p_i$ a variable. Here is my work:

To say two variables are negatively correlated suggests that an increase in occurrence of one lowers the appearance of the other. Mathematically:

  • Correlation coefficient $= \rho (X_1, X_2) = \frac{\mathrm{Cov}(X_1, X_2)}{\sqrt{\mathrm{Var}(X) \mathrm{Var}(Y)}}$

We don't have to deal with the denominator since the variance of any random variable is nonnegative by design, meaning the denominator will always be positive. We focus on the covariance:

  • $\mathrm{Cov}(X_1,X_2) = E[X_1X_2] - E[X_1]E[X_2]$

We can interpret $E[X_1X_2]$ as $P(X=1) P(X=2 \mid X=1)$. In words, this is the probability that the dice roll results in $1$ and also results in $2$ which is impossible since the die can only display a single number at a time. So, $E[X_1X_2] = 0$.

Thus, we are left with just proving that $-E[X_1]E[X_2]$ is negative, but because $X_1$ and $X_2$ are sums of independent Bernoulli random variables, these expectations are always positive, implying

  • $\mathrm{Cov}(X_1, X_2) = -(\text{some positive number})$

This proves the correlation coefficient is negative. But what is the point of specifying that the probability of each number is an arbitrary value $p_i$?
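
Regarding the role of the $p_i$: they enter the exact value of the covariance, even though the sign argument above never uses them. A small simulation sketch (with an arbitrary probability vector, chosen only for illustration) that compares the empirical covariance with the multinomial formula $\operatorname{Cov}(X_1,X_2)=-np_1p_2$:

```python
import numpy as np

# Estimate Cov(X1, X2) for multinomial counts from n tosses of a loaded die.
rng = np.random.default_rng(0)
n, p = 20, np.array([0.3, 0.2, 0.1, 0.15, 0.15, 0.1])
counts = rng.multinomial(n, p, size=200_000)      # each row is (X1, ..., X6) for one experiment
X1, X2 = counts[:, 0], counts[:, 1]
print(np.cov(X1, X2)[0, 1], -n * p[0] * p[1])     # empirical vs. theoretical -n p1 p2
```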

Convergence for log 2

Posted: 21 May 2022 04:58 PM PDT

  1. The series for $\log (1+x)$ is convergent (by estimating the radius of convergence) only for $|x| <1$. How is it then still true for $x = 1$? (A quick numerical check is sketched after this list.)

  2. How is it still apparently true when $x$ is a root of unity other than $-1$? For example, in the answers to this question: Summing up the series $a_{3k}$ where $\log(1-x+x^2) = \sum a_k x^k$
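
For question 1, a quick numerical sketch: the partial sums at $x=1$ do converge (this is the alternating harmonic series; Abel's theorem is the standard way to identify the limit with $\log 2$):

```python
from math import log

# Partial sums of 1 - 1/2 + 1/3 - ..., i.e. the series for log(1+x) at x = 1.
for N in [10, 100, 1000, 10000]:
    s = sum((-1) ** (k + 1) / k for k in range(1, N + 1))
    print(N, s, log(2))
```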
