Sunday, June 27, 2021

Recent Questions - Mathematics Stack Exchange


Distance between 3 stations

Posted: 27 Jun 2021 10:35 PM PDT

There are $3$ stations that are distributed uniformly over $10$ units of distance, and I wish to find the probability of the three of them being at most $1$ unit of distance apart from the others. I set $X,Y,Z \sim \mathrm{Unif}(0,10)$ and calculated $$P(|X-Y|<1 \cup |X-Z|<1 \cup|Y-Z|<1)$$ by drawing these relations graphically and finding the ratios of the areas. The last inequality is dependent upon the others, because if $|X-Y|<1$ and $|X-Z|<1$ then $|Y-Z|<2$. Is there an easier way to solve it? I haven't received a correct answer, so I was wondering whether this is even a correct way to solve it.
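A quick Monte Carlo sanity check along these lines (a sketch, not from the original post) can estimate both readings of the event: the union of pairwise events written above, and the stricter requirement that every pairwise distance is below $1$.

```python
import random

# Monte Carlo sanity check (not from the original post): estimate both the
# union P(|X-Y|<1 or |X-Z|<1 or |Y-Z|<1) and the stricter intersection
# P(all three pairwise distances < 1), for X, Y, Z ~ Uniform(0, 10).
N = 1_000_000
union_hits = inter_hits = 0
for _ in range(N):
    x, y, z = (random.uniform(0, 10) for _ in range(3))
    d = [abs(x - y), abs(x - z), abs(y - z)]
    union_hits += any(t < 1 for t in d)
    inter_hits += all(t < 1 for t in d)

print("P(some pair within 1 unit)  ~", union_hits / N)
print("P(every pair within 1 unit) ~", inter_hits / N)
```

Comparing the two estimates against the area computation should at least show which interpretation the intended answer matches.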

Prove it's a public answer. No one's gonna explain it.

Posted: 27 Jun 2021 10:36 PM PDT

[The problem statement is given only as an image in the original post.]

Please explain it.

Combinatorics without repetition with order, limits and duplicates

Posted: 27 Jun 2021 10:32 PM PDT

Hey I'm new here and I came across a problem that probably has already been answered before but I couldn't find it.

I have $n$ types of balls with different quantities: type 1 has $a_1$ balls, type 2 has $a_2$ balls, and so on. How many different ways are there to arrange $K$ balls in a row?
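One common way to sanity-check any closed form (for example, one obtained via exponential generating functions) is brute-force enumeration on small instances. Here is a sketch with hypothetical quantities:

```python
from itertools import permutations

def count_arrangements(quantities, k):
    """Brute-force count of distinct ordered arrangements of k balls drawn
    from a multiset with quantities[i] indistinguishable balls of type i.
    Only practical for small inputs; meant as a check for a formula."""
    pool = [t for t, q in enumerate(quantities) for _ in range(q)]
    return len(set(permutations(pool, k)))

# Example (hypothetical numbers): 2 balls of type 0, 1 of type 1, 3 of type 2.
print(count_arrangements([2, 1, 3], 3))
```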

Existence of equivalent norm

Posted: 27 Jun 2021 10:31 PM PDT

Let $L$ be a normed space with norm $\|\cdot\|_1: L\to \mathbf{R}$. Let $T$ be a linear invertible operator on $L$ such that $\|T^n x\|_1 < c \|x\|_1$ for all $x\in L$ and $n\in \mathbf{N}$. Show that there is a norm $\|\cdot\|_2: L\to \mathbf{R}$ equivalent to norm $\|\cdot\|_1$ such that $\| Tx\|_2=\|x\|_2$ for all $x\in L$.

My thought is to consider $\|\cdot\|_2: L\to \mathbf{R}$ such that

$\|x\|_2=\|x\|_1 + \sup_{n\in \mathbf{N}} \|T^n x\|_1$

I have proven the equivalence of the two norms, but I am finding it difficult to prove that $\|T\|_2=1$.

Any help would be appreciated. Is my approach on track? Thanks.

Factor Analysis

Posted: 27 Jun 2021 10:30 PM PDT

Factor Analysis

I'm confused about how we arrived at Fact 1 from Assumption 1. I need help.

rank and diagonalizability

Posted: 27 Jun 2021 10:30 PM PDT

I wonder if the following assertion is true:

If $A\in M_{n\times n}(\mathbb{C})$ is such that $\text{rank}(A)=\text{rank}(A^2)$, then $A$ is diagonalizable.

The motivation came from a question already discussed here. I just tried to reverse the proof presented there, but something appears to be wrong, as some extra conditions might be necessary for the assertion to hold true. Am I correct about the assertion, or might some other proof be presented? Any help is appreciated.

If a,b,c,d,e are positive reals then prove the following inequality

Posted: 27 Jun 2021 10:24 PM PDT

$$\sum_{cyclic}\frac{a}{b+c}\geq\frac{5}{2}$$ My approach: I learned this technique from the book the question was taken from, but it doesn't quite seem to work. So,

Let $S=\sum_{cyclic}\frac{a}{b+c}$. We can write $$\sum_ca=\sum_c \frac {\sqrt a\left(\sqrt a \,\sqrt {b+c}\right)}{\sqrt {b+c}}$$

On applying the Cauchy–Schwarz inequality, $$\left(\sum_ca\right)^2\leq S\left(\sum_c a(b+c)\right)$$ All that remains for me to prove is that $$\frac{\left(\sum_ca\right)^2}{\sum_c a(b+c)} \geq \frac{5}{2}.$$ I tried to prove this using the AM–GM inequality, but that only helped me complicate matters. Also, I would like to know whether the cyclic sum is distributive over its elements. Thanks!
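A quick numerical check of both the original cyclic sum and the reduced quotient is easy to run (a sketch, not from the book; both quantities appear to stay at or above $5/2$ on random samples, with equality when all five variables are equal):

```python
import random

def cyc(vals, f):
    """Sum f over the 5 cyclic shifts of (a, b, c, d, e)."""
    n = len(vals)
    return sum(f(*[vals[(i + j) % n] for j in range(n)]) for i in range(n))

worst_S, worst_ratio = float("inf"), float("inf")
for _ in range(100_000):
    v = [random.uniform(0.01, 10) for _ in range(5)]
    S = cyc(v, lambda a, b, c, d, e: a / (b + c))
    ratio = sum(v) ** 2 / cyc(v, lambda a, b, c, d, e: a * (b + c))
    worst_S, worst_ratio = min(worst_S, S), min(worst_ratio, ratio)

print("min of cyclic sum      :", worst_S)      # expect >= 2.5
print("min of reduced quotient:", worst_ratio)  # expect >= 2.5
```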

Sufficiency, ancillarity and completeness

Posted: 27 Jun 2021 10:19 PM PDT

Let $f(x,\theta)=1$ if $\theta-\tfrac{1}{2}<x<\theta+\tfrac{1}{2}$. Show whether the joint sufficient statistic $(X_{(1)},X_{(2)})$ is complete or not.

I am stuck on the pdf of the joint sufficient statistic of the uniform distribution on this range.

Create a list of my own questions.

Posted: 27 Jun 2021 10:30 PM PDT

I am planning to write solution manuals for some famous books in analysis, topology, and algebra which currently do not have solutions available online. I'd like to share those solutions on this site in order to get comments from readers, in case some of my solutions are wrong or there is a better way to solve a problem. Is there any way to do that on Math Stack Exchange such that everyone can easily search for and access the solutions? For example, can I create a list of questions whose titles are something like "Solutions for chapter x, book y"? That way, one would just need to access that list to see all questions and solutions for a particular chapter of a particular book. Of course, one would be able to comment on and discuss each solution in that list freely, as usual.

Proving that $3\mathbb{Z}/6\mathbb{Z} \trianglelefteq \mathbb{Z}/6\mathbb{Z}$.

Posted: 27 Jun 2021 10:25 PM PDT

I have to prove that $3\mathbb{Z}/6\mathbb{Z} \trianglelefteq \mathbb{Z}/6\mathbb{Z}$.

I proved $3\mathbb{Z}/6\mathbb{Z} \leq \mathbb{Z}/6\mathbb{Z}$, so I have to show

$\forall A \in \mathbb{Z}/6\mathbb{Z},$ $A^{-1}+3\mathbb{Z}/6\mathbb{Z}+A \subset 3\mathbb{Z}/6\mathbb{Z}$.

(Proof)

Let $A \in \mathbb{Z}/6\mathbb{Z}$.

I can write $A=a+6\mathbb{Z}$ where $a \in \mathbb{Z}$ and then $A^{-1}=-a+6\mathbb{Z}$.

I'm stuck here.

What I have to do next is to let $B \in A^{-1}+3\mathbb{Z}/6\mathbb{Z}+A$ and show $B \in 3\mathbb{Z}/6\mathbb{Z}.$

But when I let $B \in A^{-1}+3\mathbb{Z}/6\mathbb{Z}+A$, how can I express $B$?
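Since $\mathbb{Z}/6\mathbb{Z}$ is written additively, the conjugate of $H=3\mathbb{Z}/6\mathbb{Z}$ by $A$ is $(-A)+H+A$, and a brute-force check over the representatives $0,\dots,5$ is easy to run (an illustrative sketch only, not a substitute for the written proof):

```python
# Brute-force sanity check over coset representatives 0..5 (this illustrates
# the claim; it is not a substitute for the written proof).
H = {0, 3}  # representatives of 3Z/6Z inside Z/6Z
for a in range(6):
    conj = {(-a + h + a) % 6 for h in H}
    assert conj == H, (a, conj)
print("(-a) + H + a == H for every a in Z/6Z")
```

The check passes trivially because addition is commutative; in an abelian group every subgroup is normal, which is also the heart of the written argument.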

Can I apply this elementary operation in determinants?

Posted: 27 Jun 2021 10:05 PM PDT

Given a determinant

$∆$=$\left|\begin{array}{lll}a_{1} & b_{1} & c_{1} \\ a_{2} & b_{2} & c_{2} \\ a_{3} & b_{3} & c_{3}\end{array}\right|$

Can I apply the operation

$R_{1} \longrightarrow R_{1} \cdot R_{2}$

so that the value of $∆$ doesn't change?
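As a quick numerical experiment (assuming $R_{1} \cdot R_{2}$ means the entrywise product of the two rows, which is my reading, not something stated in the post), one can compare the determinant before and after the operation on a sample matrix:

```python
import numpy as np

# One numerical experiment (assuming "R1 -> R1 . R2" means replacing row 1
# with the entrywise product of rows 1 and 2).
A = np.array([[1.0, 2.0, 3.0],
              [4.0, 5.0, 6.0],
              [7.0, 8.0, 10.0]])
B = A.copy()
B[0] = A[0] * A[1]          # entrywise product of R1 and R2

print("det before:", np.linalg.det(A))
print("det after :", np.linalg.det(B))
```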

Convergence of $\int_0^\pi \frac{\tan x}{x}dx$

Posted: 27 Jun 2021 10:14 PM PDT

I am stuck with the following problem on improper integrals:

Examine the convergence of $\int_0^\pi \frac{\tan x}{x}dx$

I am not sure how to proceed. What I have at hand are the comparison test, the $\mu$-test, and the Cauchy criterion.

Fourier transform of $\frac{t}{(t^2+1)^2}$

Posted: 27 Jun 2021 10:11 PM PDT

Can you help me with what steps to follow to find the Fourier transform of $\frac{t}{(t^2+1)^2}$?

I know it will be solved on the lines of time-frequency duality but cannot seem to get the exact steps for it. Any help will be appreciated.
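If a symbolic cross-check is useful, SymPy can be asked for the transform directly; note that `sympy.fourier_transform` uses the $e^{-2\pi i t \omega}$ convention, which may differ from the one used in your course, and depending on the version the integral may come back unevaluated (this is only a sketch):

```python
import sympy as sp

t, w = sp.symbols('t omega', real=True)
f = t / (t**2 + 1)**2

# SymPy's fourier_transform uses F(w) = Integral(f(t)*exp(-2*pi*I*t*w), (t, -oo, oo));
# adjust constants if your course uses the exp(-I*w*t) convention.  Depending on
# the SymPy version this may come back unevaluated, in which case a manual
# residue/duality computation is still needed.
F = sp.fourier_transform(f, t, w)
print(sp.simplify(F))
```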

Limit of the convolution of two functions as the product of the integral of one and the limit of the other

Posted: 27 Jun 2021 10:34 PM PDT

Let $\lambda, h : \mathbb{R} \to \mathbb{R}$ and suppose $\int_{-\infty}^\infty h(t)\,dt$ exists and $\lim_{t \to \infty}\lambda(t)$ exists. I want to show that the limit of their convolution equals the product of the integral under one and the limit of the other:

$$\lim_{t \to \infty} (\lambda*h)(t) = \lim_{t \to \infty} \int_{-\infty}^\infty h(\tau) \lambda(t-\tau) \,d\tau = \left(\int_{-\infty}^\infty h(t)\,dt\right)\left( \lim_{t \to \infty} \lambda(t) \right). $$ The reason why I think this should be the case is that since $\int_{-\infty}^\infty h(t)\,dt$ is finite, $h(t)$ has to converge to 0 as $t \to \infty$ and as $t \to -\infty$, thus the only parts of $\lambda$ that contribute to the convolution integral are those "finitely near" to $t$, which as $t \to \infty$, become a constant value, $\lim_{t \to \infty}\lambda(t)$. So in the limit this constant can be put in front of the integral and the result follows. However I don't know how I would go about formally proving this (and even whether it is always true given the conditions I gave).

The reason why I want to show this is because in my case, both $\lambda$ and $h$ map from positive reals to positive reals, and $\lambda$ is the "input" and $h$ is the "impulse response", and I basically want to have a way of showing what the limiting/steady-state response will be if I know what the input will converge to be.
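A numerical illustration with concrete choices (mine, not from the post: $h(t)=e^{-t}$ for $t\ge 0$, so $\int h = 1$, and $\lambda(t)=2/(1+e^{-t})$, which tends to $2$) is consistent with the claimed limit, here $1\cdot 2 = 2$:

```python
import numpy as np

# Numerical illustration with concrete choices (mine, not from the post):
#   h(t)   = exp(-t) for t >= 0, 0 otherwise     -> integral of h is 1
#   lam(t) = 2 / (1 + exp(-t))                   -> limit 2 as t -> infinity
h = lambda t: np.where(t >= 0, np.exp(-np.clip(t, 0, None)), 0.0)
lam = lambda t: 2.0 / (1.0 + np.exp(-np.clip(t, -50, 50)))

tau = np.linspace(-50, 50, 200_001)    # integration grid
dtau = tau[1] - tau[0]
for t in (5.0, 20.0, 50.0):
    conv = np.sum(h(tau) * lam(t - tau)) * dtau   # Riemann-sum approximation
    print(f"(lam*h)({t}) ~ {conv:.4f}")           # should approach 1 * 2 = 2
```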

Proving One-to-One Function

Posted: 27 Jun 2021 09:49 PM PDT

I have been reading Daniel Cunningham's Set Theory: A First Course, and am stuck at this problem which is cited by a proof in the next chapter, so I can't skip this problem without also getting stuck at the next chapter.

Let $f: \omega \times \omega \rightarrow \omega$ be defined by $f(i, j) = 2^i \cdot 3^j$. Prove that $f$ is one-to-one. Prove that if $i < m$ and $j < n$, then $f(i, j) < f(m, n)$.

Would really appreciate some hints to solve the problem!
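A brute-force check on a small range is easy to run as an empirical sketch (the actual proof rests on unique factorization and the monotonicity of $2^i$ and $3^j$):

```python
# Brute-force check on a small range (an empirical sketch only; the actual
# proof rests on unique factorization and monotonicity of 2^i and 3^j).
f = lambda i, j: 2**i * 3**j
N = 12

seen = {}
for i in range(N):
    for j in range(N):
        assert f(i, j) not in seen, f"collision at {(i, j)} and {seen[f(i, j)]}"
        seen[f(i, j)] = (i, j)

assert all(f(i, j) < f(m, n)
           for i in range(N) for j in range(N)
           for m in range(i + 1, N) for n in range(j + 1, N))
print("injective and order-preserving on {0,...,11} x {0,...,11}")
```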

Form of $D^{\frac{1}{k}}$ of a diagonal matrix $D$

Posted: 27 Jun 2021 09:48 PM PDT

Let $D$ be a diagonal matrix, i.e., $D=\operatorname{diag}(d_1, \cdots, d_n)$. Is there any formula or process for computing $D^{\frac{1}{k}}$?

I know one solution, i.e., $D^{\frac{1}{k}}= \operatorname{diag}(d_1^{\frac{1}{k}}, \cdots, d_n^{\frac{1}{k}})$, but after considering $I_2 = \begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix}$, I see that $D^{\frac{1}{k}}$ might be a non-diagonal matrix, so I wonder whether there is any formula or systematic way to compute them.

For example, $I_2 = I_2 \cdot I_2$ and also $I_2 = \begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix} \begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix}$.
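A small sketch of the entrywise (principal) root, together with the non-diagonal square root of $I_2$ from the question, just to make the two phenomena concrete (this assumes nonnegative diagonal entries and gives one root among possibly many):

```python
import numpy as np

def principal_kth_root(D, k):
    """Principal k-th root of a diagonal matrix with nonnegative entries:
    take the entrywise k-th root of the diagonal.  This is one root among
    possibly many (the I_2 example in the post gives a non-diagonal square
    root), not the unique one."""
    d = np.diag(D).astype(float)
    return np.diag(d ** (1.0 / k))

D = np.diag([4.0, 9.0, 16.0])
R = principal_kth_root(D, 2)
print(R)
print(np.allclose(np.linalg.matrix_power(R, 2), D))   # True

# A non-diagonal square root of the identity, as in the question:
S = np.array([[0.0, 1.0], [1.0, 0.0]])
print(np.allclose(S @ S, np.eye(2)))                  # True
```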

Doubt about arbitrary constants in solutions to ODEs

Posted: 27 Jun 2021 09:41 PM PDT

I've been reading about ODEs and find most of the material frustratingly unrigorous. It is claimed that an $n$-th order ODE has $n$ arbitrary constants in its solution. The standard justification is that if we have $F\left(x,f(x),f'(x),\cdots,f^{(n)}(x)\right)=0$ then integrating $n$ times gives $n$ constants on the right. I do not see how that helps. Why not integrate it more times? You'd get more constants, but you wouldn't have to write the solutions using that many constants. So why do the constants all have to make an appearance when you integrate $n$ times? I can understand why you would have $n$ constants if your equation can be expressed as $f^{(n)}(x)=F(x,f(x),f'(x),\cdots,f^{(n-1)}(x))$ with $F$ differentiable and all solutions are analytic and the derivatives of $F$ have nice properties when divided by $n!$ and some other stuff that would depend on the domain. This gives you that $f^{(n)}$ is differentiable, so you add more derivatives to everything inside $f$ to get that $f^{(n+1)}$ is differentiable and so on to give you infinitely many derivatives, all of which may be found in terms of arbitrary values for $f(x)$, $f'(x)$, ... $f^{(n-1)}(x)$, and the sequence of derivatives defines a unique candidate solution. But even then, for it to be defined on all of your domain of interest, or for that matter, any more than a single point, you still need the power series to converge.

Location of roots for a special class of polynomials in $\mathbb{Z}[X]$

Posted: 27 Jun 2021 09:54 PM PDT

I am reading "Prime numbers and Irreducible Polynomials" by M. Ram Murty published in American Mathematical Monthly 2002. I have a question on a result which is as follows

Statement:

Let $f(x)=a_nx^n+a_{n-1}x^{n-1}+\cdots+a_1x+a_0$ belong to $\mathbb{Z}[X]$. Suppose that $a_n\ge1, \ a_{n-1}\ge 0 \ $ and $|a_i|\le H$ for $i=0,1,...,n-2,$ where $H$ is some positive constant. Then any complex zero $\alpha$ of $f(x)$ either has $\mathfrak{R}(\alpha)\le 0$ or satisfies $$ |\alpha| < \frac{1+\sqrt{1+4H}}{2} \ \ \ \ \ \ \ \ \ \ (1) $$

Proof:

If $|z|>1$ and $\mathfrak{R}(z)>0$ we observe that $$ \left|\frac{f(z)}{z^n}\right| \ge \left|a_n+\frac{a_{n-1}}{z}\right|-H\left(\frac{1}{|z|^2}+\cdots+\frac{1}{|z|^n}\right) $$ $$ >\mathfrak{R}\left(a_n+\frac{a_{n-1}}{z}\right)-\frac{H}{|z|^2-|z|} $$ $$ \ge 1-\frac{H}{|z|^2-|z|} = \frac{|z|^2-|z|-H}{|z|^2-|z|} \ge 0 $$ whenever $$ |z| \ge \frac{1+\sqrt{1+4H}}{2} \ \ \ \ \ \ \ \ \ \ (2) $$ Consider an arbitrary complex zero $\alpha$ of $f(x)$. If $|\alpha|\le 1$ then (1) holds trivially. Assume that $|\alpha|>1$. Either $\mathfrak{R}(\alpha)\le 0$ or $(2)$ must fail for $z=\alpha$, since $\left|\frac{f(z)}{z^n}\right|$ is positive whenever $\mathfrak{R}(z)>0$ and (2) holds.

Thus either $\mathfrak{R}(\alpha) \le 0$ or $\alpha$ satisfies (1).


My question:

  1. How is this inequality obtained? $$ \left|\frac{f(z)}{z^n}\right| \ge \left|a_n+\frac{a_{n-1}}{z}\right|-H\left(\frac{1}{|z|^2}+\cdots+\frac{1}{|z|^n}\right) $$

  2. How to obtain this inequality? $$ \left|a_n+\frac{a_{n-1}}{z}\right|-H\left(\frac{1}{|z|^2}+\cdots+\frac{1}{|z|^n}\right) >\mathfrak{R}\left(a_n+\frac{a_{n-1}}{z}\right)-\frac{H}{|z|^2-|z|} $$

  3. We are using the assumptions $a_n \ge 1$ and $a_{n-1} \ge 0$ to obtain the inequality: $$ \mathfrak{R}\left(a_n+\frac{a_{n-1}}{z}\right)-\frac{H}{|z|^2-|z|} \ge 1-\frac{H}{|z|^2-|z|} $$ Can we weaken the restrictions on $a_n$ and $a_{n-1}$?
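Independently of questions 1–3, a random numerical test of the conclusion is easy to set up (a sketch with my own choices of $n$ and $H$; if a sampled polynomial satisfying the hypotheses violated the conclusion, the assertion below would flag it):

```python
import numpy as np

rng = np.random.default_rng(0)
H, n = 5, 8
bound = (1 + np.sqrt(1 + 4 * H)) / 2

for _ in range(2000):
    # a_n >= 1, a_{n-1} >= 0, |a_i| <= H for i = 0..n-2, all integers
    coeffs = np.concatenate(([rng.integers(1, 6), rng.integers(0, 6)],
                             rng.integers(-H, H + 1, size=n - 1)))
    roots = np.roots(coeffs)            # numpy orders coefficients from a_n down to a_0
    ok = (roots.real <= 1e-9) | (np.abs(roots) < bound + 1e-9)
    assert ok.all(), (coeffs, roots[~ok])

print("all sampled roots have Re(z) <= 0 or |z| <", bound)
```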

Generalization of weak law of large number

Posted: 27 Jun 2021 10:37 PM PDT

In Exercise 5.2.12 of Chung Kai Lai's "A Course in Probability Theory", there is a question:

Let $\{X_n\}$ be pairwise independent with a common distribution function $F$ (namely, identically distributed) such that

(i) $\int_{|x|\leq n}x \, dF(x)=o(1)$

(ii) $n \int_{|x|>n} \, dF(x)=o(1)$

then $\frac{\sum_{j=1}^{n}X_{j}}{n}\rightarrow 0 $ in probability.

From condition (i) we can get $\operatorname E(X_j)=0$. My idea was to get a bound on $\operatorname E(X_j^2)$, but that failed; another thought was to construct a sequence of random variables equivalent to $\{X_n\}$ by truncating $X_n$ at $n$, as the author does in this section, but that also failed. The problem is that I don't know how to apply condition (ii), and I find no way to interpret it. Can someone give me a hint on the interpretation of (ii), or some method to get the result? Thanks in advance!

A different approach to the question: Determine max $|z^3 − z + 2|$. $z∈\Bbb C, |z|=1 $

Posted: 27 Jun 2021 10:26 PM PDT

Determine max $|z^3 − z + 2|$.
$z∈\Bbb C, |z|=1 $

Now, quite clearly, substituting $z = x + iy$ makes the question easy. The solution is not very difficult after that. However, is there some other (shorter or longer) method to solve it and is a geometrical perspective possible?

Finally, have you seen something similar being used practically? These questions to maximize an expression are quite common.
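A quick numerical scan over the unit circle is one way to see the value and where the maximum is attained before hunting for a geometric argument (a sketch, not a proof):

```python
import numpy as np

# Sample |z^3 - z + 2| on a fine grid of the unit circle.
theta = np.linspace(0, 2 * np.pi, 1_000_001)
z = np.exp(1j * theta)
vals = np.abs(z**3 - z + 2)
k = vals.argmax()
print("max |z^3 - z + 2| ~", vals[k])
print("attained near theta ~", theta[k])
```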

Does differentiating a differentiable function give a differentiable function?

Posted: 27 Jun 2021 09:54 PM PDT

I want to know whether, if a differentiable function is differentiated, the result is still differentiable.

I mean, let $f \ $ be a differentiable function, and $f' \ $ its derivative. Is $f' \ $ always also a differentiable function?
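A standard example to probe here is $f(x)=x|x|$, which is differentiable everywhere with $f'(x)=2|x|$; the sketch below looks at the one-sided difference quotients of $f'$ at $0$ numerically (an illustration, not a proof):

```python
# f(x) = x*|x| is differentiable with f'(x) = 2|x|; the one-sided difference
# quotients of f' at 0 disagree, so f' is not differentiable there.
fprime = lambda x: 2 * abs(x)

for h in (1e-1, 1e-3, 1e-6):
    right = (fprime(h) - fprime(0)) / h        # -> +2
    left = (fprime(-h) - fprime(0)) / (-h)     # -> -2
    print(h, right, left)
```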

Laurent series of $\frac{-z-1+e^z}{z^4}$ around z=0

Posted: 27 Jun 2021 10:09 PM PDT

I am trying to find the Laurent expansion of $\frac{-z-1+e^z}{z^4}$ around $z=0$, but I can't find a way; maybe I can use some expression like $\frac{1}{1-u}$, I don't know. Can you give me some hints?
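If a symbolic cross-check helps, SymPy's `series` handles expansions around a pole and returns the negative-power terms directly, so a hand computation from the exponential series can be compared against it (a sketch):

```python
import sympy as sp

z = sp.symbols('z')
expr = (-z - 1 + sp.exp(z)) / z**4
print(sp.series(expr, z, 0, 3))
# Expected to match expanding exp(z) = 1 + z + z^2/2! + ... and dividing by z^4:
# 1/(2*z**2) + 1/(6*z) + 1/24 + z/120 + z**2/720 + O(z**3)
```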

metric spaces with certain topological properties

Posted: 27 Jun 2021 10:11 PM PDT

For each of the following metric space $A$, determine whether $A$ is separable, whether $A$ is complete, whether $A$ is compact, and whether $A$ is connected.

  1. $A = \{a \in \ell_1(\mathbb{R}) : \|a\|_1 = 2\|a\|_2\}$ in $(\ell_1(\mathbb{R}), d_1)$.
  2. $A = \{a \in \ell_\infty(\mathbb{C}) : |a_k| = 1\,\forall k \in \mathbb{Z}^+\}$ in the space $(\ell_\infty(\mathbb{C}), d_\infty),$ where $d_\infty$ is the supremum metric.
  3. $A = \{a_0+a_1x+\cdots + a_nx^n : n\geq 0, a_i \in \{0,\pm 1\}\,\forall i\}$ in the space $((\mathcal{C}[0,1],\mathbb{R}), d_2),$ where $d_2(f,g) := (\int_0^1 (f-g)^2)^{1/2}.$

I think $1$ is closed (it's the inverse image of $0$ under the uniformly continuous function $f(x) = \|x\|_1 - 2\|x\|_2$). It's also a subspace of the space $\ell_1$, which is separable, and hence $A$ is separable. Since $\ell_1$ is also complete and $A$ is closed in $\ell_1$, $A$ is complete. I know how to show that $A$ is not bounded. Also, it seems the function $\alpha : [0,1] \to A$, $\alpha(t) = tx$ (for fixed $x \in A$), is a path in $A$.

For $2,$ I think it is not separable; one could define, for each subset $A\subseteq \mathbb{N}$, the sequence $e_{A} = (e_{A,k})_{k\geq 1}$ by $e_{A,k} = 1$ if $k\in A$ and $0$ otherwise, and then obtain an injection from $2^\mathbb{N}$ to any dense subset. I also think it's closed, and since $\ell_\infty$ is complete, this will show it's complete. But I'm not sure whether it's compact or connected.

For $3$, I think the set is actually countable; it's the countable union of sets $\{a_0+a_1x^1+\cdots + a_nx^n : a_i \in \{0,\pm 1\}\}$. Thus a countable dense subset could be itself. I think the sequence $f_n(x) = \sum_{i=0}^n x^i$ does not converge in the set, but I'm not sure whether this is useful as the metric space $(\mathcal{C}[0,1],d_2)$ is not complete. Also, I'm not sure whether this is compact or connected. I know that compact sets in metric spaces are totally bounded and complete, so this might be useful.

Even Ordered Graph Without a Two-Set Partition into Induced Subgraphs with Odd Degree Vertices

Posted: 27 Jun 2021 10:02 PM PDT

Suppose G = (V,E) is a connected graph of positive even order that can't be partitioned into two induced subgraphs G[S] and G[V-S] where each vertex in G[S] and G[V-S] has odd degree. What is the order of G?

example of $H^1(X,\mathbb{Z}/n\mathbb{Z})$ is not isomorphic to $H^1(X,\mathbb{Z})\bigotimes \mathbb{Z}/n\mathbb{Z}$ as $\mathbb{Z}$-module

Posted: 27 Jun 2021 10:01 PM PDT

Let $X$ be a topological space. I'm looking for an example where

$H^1(X,\mathbb{Z}/n\mathbb{Z})$ is not isomorphic to $H^1(X,\mathbb{Z})\otimes \mathbb{Z}/n\mathbb{Z}$ as a $\mathbb{Z}$-module.

I think that if $X$ is a connected space, $\operatorname{Tor}^1(X,\mathbb{Z})$ is trivial, so thanks to the universal coefficient theorem, the two modules above are isomorphic.

Build Parameterized Spiral Geometry

Posted: 27 Jun 2021 10:03 PM PDT

I want to build a parameterized spiral geometry.

I want to offset by a constant distance $d$ on an Archimedean spiral, as the picture in the original post shows. The original parametric form of the Archimedean spiral is: $$x(t) = a\cdot t\cdot \cos(t)$$ $$y(t) = a\cdot t\cdot \sin(t)$$

Then my target equation, which should put same-period sine waves on the Archimedean spiral, is: $$x(t) = (at + \sin(bt^2)) \cos(t)$$ $$y(t) = (at + \sin(bt^2)) \sin(t)$$

I successfully used the equation above to draw a parameterized spiral on Desmos: https://www.desmos.com/calculator/jerewqqzuc (but the sine wave does not have the same period everywhere along the spiral).

My goal is to make a $d$ offset on the spiral that has a same-period sine wave on it, and to control the amplitude of the sine wave. Please help me! Thanks! A similar example: Equation of sine wave around a spiral?
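As a starting point, here is a matplotlib sketch that superimposes a sine ripple of adjustable amplitude on the spiral radius (the parameter names a, b and amp are my own placeholders, not values from the post):

```python
import numpy as np
import matplotlib.pyplot as plt

# Sketch only, with placeholder parameters of my own choosing:
#   a   - spiral growth rate,  b - controls the spatial frequency of the ripple,
#   amp - amplitude of the sine ripple superimposed on the spiral radius.
a, b, amp = 1.0, 2.0, 0.3
t = np.linspace(0, 12 * np.pi, 20_000)

r_base = a * t
r_wavy = a * t + amp * np.sin(b * t**2)

plt.plot(r_base * np.cos(t), r_base * np.sin(t), lw=0.8, label="Archimedean spiral")
plt.plot(r_wavy * np.cos(t), r_wavy * np.sin(t), lw=0.8, label="with sine ripple")
plt.gca().set_aspect("equal")
plt.legend()
plt.show()
```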

Name of these lemmas in set theory

Posted: 27 Jun 2021 10:12 PM PDT

Lemma 1.2 If $S$ is countable and $S'\subset S$, then $S'$ is also countable

Lemma 1.3 If $S'\subset S$ and $S'$ is uncountable, then so is $S$.

I was wondering if there was a name for the logic/proof that these two lemmas act upon. That if a component lacks a quality, it disqualifies the greater set containing that component from having that quality. And the other, that should a component have a quality (in this case, uncountability), then so does its greater set containing it.

Is there a name for this reasoning?

My question is derived from Plato's Cratylus, where Socrates essentially makes the argument that:

  1. There are true and false propositions.
  2. A proposition being true is contingent on the parts (by Socrates, called names, which loosely translates to words) of the proposition also being true
  3. That the part of a true proposition must also be true, and of a false proposition false.

I thought his rationale was similar to the lemmas used for sets: for the set to be countable (true), the parts (elements) must also be countable (true).

When does $\sqrt{a b} = \sqrt{a} \sqrt{b}$?

Posted: 27 Jun 2021 10:29 PM PDT

So, my friend showed me a proof that $1=-1$ that goes this way:

$$1=\sqrt{1}=\sqrt{(-1)\times(-1)}=\sqrt{-1}\times\sqrt{-1}=i\times i=i^2=-1$$

At first sight, I stated "No, $\sqrt{ab}=\sqrt{a}\times\sqrt{b}$ is valid only for $a,b\in\mathbb{R}$ and $a,b\geq0$"

But, I remember that $\sqrt{-4}=\sqrt{4}\times\sqrt{-1}=2i$ which is true (I guess).

Was my statement true? But $\sqrt{ab}=\sqrt{a}\times\sqrt{b}$ is also valid if one of $a$ or $b$ is a negative real number. Why is it not valid when $a$ and $b$ are both negative? If my statement was wrong, what is wrong with that proof?

four state Markov chain

Posted: 27 Jun 2021 10:02 PM PDT

There are four states: A, B, C, D. The probability of moving to the left is $b$ and the probability of moving to the right is $a$. Starting at state B, what is the probability of arriving at state D? The hint says to introduce the probability of moving from C and ending in state A.

Sorry, I forgot to say that once the chain arrives at A or D, the whole process ends. So starting from B, the chain cannot first move left to A and still end at D.
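Assuming $a+b=1$ and that A and D are absorbing (my reading of the setup), first-step analysis gives a small linear system for the absorption probabilities; here is a sketch that solves it numerically with placeholder values:

```python
import numpy as np

# First-step analysis for states A, B, C, D with A and D absorbing,
# P(move right) = a, P(move left) = b, assuming a + b = 1 (placeholder values).
a = 0.6
b = 1.0 - a

# Unknowns u_B, u_C = P(absorbed at D | start at B or C):
#   u_B = a*u_C + b*0
#   u_C = a*1   + b*u_B
M = np.array([[1.0, -a],
              [-b, 1.0]])
rhs = np.array([0.0, a])
u_B, u_C = np.linalg.solve(M, rhs)
print("P(reach D from B) =", u_B, " (closed form a^2/(1-a*b) =", a**2 / (1 - a * b), ")")
```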

When does $\sqrt{wz}=\sqrt{w}\sqrt{z}$?

Posted: 27 Jun 2021 09:49 PM PDT

There exists a unique function $\sqrt{*} : \mathbb{C} \rightarrow \mathbb{C}$ such that for all $r \in [0,\infty)$ and $\theta \in (-\pi,\pi]$ it holds that $$\sqrt{r\exp(i\theta)}=\sqrt{r}\exp(i\theta/2),$$

where $\sqrt{r}$ denotes the usual principal square root of a real number $r$.

Let's take this as our definition of the principal square root of a complex number. Thus $i=\sqrt{-1}.$

Now. We know that, for all positive real $w$ and $z$, it holds that $\sqrt{wz}=\sqrt{w}\sqrt{z}$. We also know that this fails for certain complex $w$ and $z$. Otherwise, we'd be allowed to argue as follows:

$$-1 = i\cdot i = \sqrt{-1} \cdot \sqrt{-1} = \sqrt{-1 \cdot -1} = \sqrt{1}=1$$

My question: for which complex $w$ and $z$ does it hold that $\sqrt{wz}=\sqrt{w}\sqrt{z}$?
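A natural conjecture to test is that, with this principal branch, $\sqrt{wz}=\sqrt{w}\sqrt{z}$ holds exactly when $\operatorname{Arg}(w)+\operatorname{Arg}(z)\in(-\pi,\pi]$; here is a quick empirical check of that guess on random samples (a sketch, not a proof):

```python
import cmath, math, random

# Empirical check of a natural conjecture: with the principal branch,
# sqrt(w*z) == sqrt(w)*sqrt(z) exactly when Arg(w) + Arg(z) lies in (-pi, pi].
for _ in range(100_000):
    w = complex(random.uniform(-5, 5), random.uniform(-5, 5))
    z = complex(random.uniform(-5, 5), random.uniform(-5, 5))
    lhs, rhs = cmath.sqrt(w * z), cmath.sqrt(w) * cmath.sqrt(z)
    in_range = -math.pi < cmath.phase(w) + cmath.phase(z) <= math.pi
    assert (abs(lhs - rhs) < 1e-9) == in_range, (w, z)
print("identity held exactly on the samples with Arg(w) + Arg(z) in (-pi, pi]")
```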
