Tuesday, April 6, 2021

Recent Questions - Mathematics Stack Exchange

Section of a projective module has only finitely many nonzero components

Posted: 06 Apr 2021 08:33 PM PDT

In Matsumura's commutative algebra book, Theorem 11.3 shows that if $R$ is an integral domain, $K$ is its field of fractions, and $I$ is a fractional ideal of $R$ that is projective as an $R$-module, then $I$ is invertible. Below is part of the proof.

Every $R$-linear map from $I$ to $R$ is given by multiplication by some element of $K$. If we let $\varphi: F \longrightarrow I$ be a surjective map from a free module $F=\oplus R e_{i},$ by assumption there exists a splitting $\psi: I \longrightarrow F$ such that $\varphi \psi=1$. Write $\psi(x)=\sum \lambda_{i}(x) e_{i}$ for $x \in I$; then by what we have said, each $\lambda_{i}$ determines a $b_{i} \in K$ such that $\lambda_{i}(x)=b_{i} x,$ and since for each $x$ there are only finitely many $i$ such that $\lambda_{i}(x) \neq 0,$ we have $b_{i}=0$ for all but finitely many $i$.

My question is: why are there only finitely many nonzero $\lambda_{i}$? Since we do not assume that $I$ is finitely generated, it seems that $\lambda_{i}$ may be nonzero for infinitely many $i$. Could you explain how the author obtains finiteness?
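For what it's worth, here is a sketch of how the step can be reconstructed (it uses only that $R$ is a domain): finiteness for a single nonzero element already forces almost all $b_i$ to vanish, since

$$x_0 \in I,\ x_0 \neq 0 \implies \psi(x_0)=\sum_i \lambda_i(x_0)\,e_i \text{ has finite support} \implies b_i = \frac{\lambda_i(x_0)}{x_0} = 0 \text{ whenever } \lambda_i(x_0)=0.$$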

For every number $x\in[0,1]$ there exists a number $y\in[0,1]$ such that $x\le y$

Posted: 06 Apr 2021 08:31 PM PDT

Show that for every number $x\in[0,1]$ there exists a number $y\in[0,1]$ such that $x\le y$. Logical form: $\forall x\in[0,1]\ \exists y\in[0,1]\ (x\le y)$.

Proof: Let $x$ be a real number such that $x\ge 0$, where $x\in[0,1]$. Assume that $x$ is an input of a function $f$, meaning $f(x) = y$; then $f([0,1]) = [0,1]$. This implies that for every value of $x$ there exists a number $y$ such that $0 \le x, y \le 1$, which implies $x\le y$.

(Is my proof right, lacking, or just incorrect?) Could I see what your proof of that statement would be?
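For comparison, a minimal direct proof (a sketch) just picks a witness $y$ that works for every $x$:

$$\text{Let } x\in[0,1] \text{ be arbitrary and take } y=1\in[0,1]. \text{ Then } x\le 1=y, \text{ so } \forall x\in[0,1]\ \exists y\in[0,1]\ (x\le y).$$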

Proof of $\lim_{x\to\infty} \frac{a^x}{x!}=0$ when $a$ is a constant

Posted: 06 Apr 2021 08:26 PM PDT

I was working through the proof of the convergence of the Maclaurin series for certain functions, and I came across this limit, which I want to prove but am not sure how.

$\lim_{x\to\infty} \frac{a^x}{x!}=0$, where $a>0$, $a\in \mathbb{N}$

For this limit, technically $x!$ should grow faster than $a^x$ for any value of $a$, but when $a$ is really big, the limit becomes hard to prove.

Using L'Hospital is not really feasible, as an exponential function always has an exponential in its derivative and the factorial function doesn't have a derivative.

I then tried using the definition of limits: the limit holds when for all $\varepsilon>0$ there exists $m$ such that if $x>m$, then $|f(x)| = \left|\frac{a^x}{x!}\right|<\varepsilon$. But then I have no clue what value $m$ should take relative to $\varepsilon$. I'm not sure if this is the way to solve the limit.
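One standard route (a sketch, with $x$ running over the positive integers so that $x!$ makes sense): fix $N \geq 2a$; every factor beyond $N$ is at most $\tfrac{1}{2}$, which yields a geometric bound,

$$\frac{a^x}{x!} = \frac{a^N}{N!}\prod_{k=N+1}^{x}\frac{a}{k} \leq \frac{a^N}{N!}\left(\frac{1}{2}\right)^{x-N} \longrightarrow 0,$$

so for a given $\varepsilon>0$ any $m > N + \log_2\!\left(\frac{a^N}{N!\,\varepsilon}\right)$ works.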

Confused about marginal density

Posted: 06 Apr 2021 08:25 PM PDT

My book says the following

If you have two continuous random variables $X$ and $Y$ with joint pdf $f(x,y)$, then

$f(y) = \int_{-\infty}^\infty f(x,y)\,dx$

$f(x) = \int_{-\infty}^\infty f(x,y)\,dy$

My question is: is this by definition, or is there a proof or theorem that gives this result? I wish to know WHY this is true.
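For what it's worth, the usual justification (a sketch) recovers the marginal density by differentiating the marginal CDF:

$$F_Y(y) = P(Y\le y) = \int_{-\infty}^{y}\int_{-\infty}^{\infty} f(x,t)\,dx\,dt \quad\Longrightarrow\quad f_Y(y) = \frac{d}{dy}F_Y(y) = \int_{-\infty}^{\infty} f(x,y)\,dx.$$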

Convergent Infinite Product in Proof of Partitions Identity

Posted: 06 Apr 2021 08:25 PM PDT

I am working through The Theory of Partitions by George Andrews (I have the first paperback edition, published in 1998).

Theorem 1.1 is a standard result that writes the generating function for partitions as an infinite product, e.g.

$$ \sum_{n \geq 0} p(n) q^n = \prod_{n \geq 1} \frac{1}{1 - q^n}. $$

At the end of the proof there is a discussion of convergence on page 5 of my copy.

The claim is made that if $0 < q < 1$ then $$ \prod_{i = 1}^\infty \frac{1}{1 - q^i} < \infty $$

Andrews seems to think this statement is obvious, but I don't see the justification. Could someone please point me in the right direction?
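For reference, a standard comparison argument (a sketch, for $0<q<1$): taking logarithms turns the product into a series dominated by a geometric one,

$$0 \le -\log(1-q^i) = \sum_{m\ge 1}\frac{q^{im}}{m} \le \frac{q^i}{1-q^i} \le \frac{q^i}{1-q},$$

so $\sum_{i\ge 1} -\log(1-q^i) < \infty$ and the product converges.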

Rings of formal Laurent series which are Dedekind domains

Posted: 06 Apr 2021 08:20 PM PDT

Let $R$ be an integral domain and $R((x))$ be the ring of formal Laurent series over $R$. (The answer to this question has a good explanation for our ring.)

Is it true that $R$ is a Dedekind domain if and only if $R((x))$ is a Dedekind domain? If not, is there any better necessary and sufficient condition for $R((x))$ to be a Dedekind domain?

Note that in the case of $R[x]$, $R[x]$ is a Dedekind domain if and only if $R$ is a field. (We can use a proof similar to this one.) The same goes for $R[[x]]$ with a similar proof, and for $R(x)$, since $R(x)=S^{-1}R[x]$ where $S=\{1,x,x^2,\ldots\}$. Unfortunately, the same thing doesn't happen for $R((x))$: if $R$ is a field, then $R((x))$ is also a field. (The proof is also available in the earlier question linked above.) If we weaken the condition on $R$ to being a Euclidean domain, then $R((x))$ will also be a Euclidean domain. (This has been discussed here.) Hence, $R$ being a field is of course not a necessary condition for $R((x))$ to be a Dedekind domain. Thus, a question arises:

What is a necessary and sufficient condition for $R$ to make $R((x))$ a Dedekind domain?

Thank you very much in advance.

Shafarevich Basic Algebraic Geometry Exercise 2.1.8 regarding the sum of tangent spaces

Posted: 06 Apr 2021 08:20 PM PDT

Shafarevich Basic Algebraic Geometry exercise 2.1.8 asks:

Prove that if $X = X_1\cup X_2$ and $x\in X_1\cap X_2$ then the tangent spaces $\Theta_{X,x}$, $\Theta_{X_1,x}$, $\Theta_{X_2,x}$ satisfy $$\Theta_{X_1,x} + \Theta_{X_2,x} \subset \Theta_{X,x}.$$

I worked out some examples but haven't yet been able to come up with a good approach to this problem. Could I get a hint to point me in a good direction?

When $PGL(n,q)$ is simple

Posted: 06 Apr 2021 08:20 PM PDT

I need to find all pairs $(n,q)$ with $n \geq 2$ for which $PGL(n,q)$ is simple.

We proved in the course that $PSL(n,q)$ is simple provided that $n \geq 2$ and $(n,q) \ne (2,2), (2,3)$. I have also read in one of the posts that $PGL(n,\mathbb{F})=PSL(n,\mathbb{F})$ if and only if every element of $\mathbb{F}$ has an $n$th root in $\mathbb{F}$ (where $\mathbb{F}$ is a finite field).

So, can we say these are all the cases when $PGL(n,q)$ is simple? I feel this is not a complete argument!
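One relevant standard fact (stated as a sketch, not a full answer): the determinant induces

$$PGL(n,q)/PSL(n,q) \cong \mathbb{F}_q^\times/(\mathbb{F}_q^\times)^n, \qquad \left[PGL(n,q):PSL(n,q)\right] = \gcd(n,\,q-1),$$

so whenever $\gcd(n,q-1)>1$ the subgroup $PSL(n,q)$ is a proper nontrivial normal subgroup and $PGL(n,q)$ is not simple.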

Evaluating the Tensor Product of Smooth Tensor Fields at a Point

Posted: 06 Apr 2021 08:19 PM PDT

I'm reading Lee's Introduction to Smooth Manifolds, and in Chapter 12 I've hit a minor but annoying wrinkle regarding tensor fields. Proposition 12.25(a) states that for any two covariant tensor fields $A$ and $B$ and smooth map $F : M \to N$,

$F^* (A \otimes B) = F^* A \otimes F^* B$

where $F^* A$ is the pullback of $A$ by $F$. Proving the proposition is straightforward enough, but in doing so I make use of $(A \otimes B)_p = A_p \otimes B_p$, which seems intuitively obvious and is actually the definition of the tensor product of tensor fields in another text. But unfortunately, Lee never explicitly states this, and it seemingly can't be derived from any other information in the chapter. In fact, after defining a covariant tensor field on a smooth manifold $M$ as a section of the tensor bundle $$ T^k T^* M = \coprod_{p \in M} T^k(T^*_p M), $$ (where $T^k(T^*_p M)$ is the vector space of $k$-tensors on $T_p M$) Lee simply states that any such section $A$ can be written (using the summation convention) as $$ A = A_{i_1 \dots i_k} dx^{i_1} \otimes \dots \otimes dx^{i_k}, $$ without any mention of how one is to evaluate the given tensor product of covector fields at a point.

Is there any way I can arrive at the desired result using strictly what is presented by Lee, or am I being overly pedantic and should just move on?

Is there a way to "curve" a vector space and still define an inner product or something similar?

Posted: 06 Apr 2021 08:18 PM PDT

I want to define an inner product (vector) space that is also a Riemannian manifold, but I do not know how the inner product would be defined. For my uses, the metric tensor won't work because that is the inner product on the tangent spaces. What are the most natural ways to define an inner product between points on this space?

Show that $\alpha$ is path homotopic to an injective $\beta:I\to S^2$

Posted: 06 Apr 2021 08:17 PM PDT

Let $\alpha:I\to S^2$ be a path. Suppose $\alpha$ is not surjective and $\alpha(0)\neq\alpha(1)$. Show that $\alpha$ is path homotopic to an injective $\beta:I\to S^2$

My idea is to note that $S^2\setminus\{x_0\}$ is homeomorphic to a plane, where $x_0$ is a point of $S^2$ not in the image of $\alpha$. I am stuck at this point.

Analytic Proof to Functional Equation [Existence]

Posted: 06 Apr 2021 08:17 PM PDT

Suppose $f:\mathbb{R}\to\mathbb{R}$. Prove that no solution exists to the functional equation $f(f(x)) + xf(x) = 1$.

I've been trying to find a way of proving this by differentiating both sides and then showing that the resulting differential equation has no solutions, but I'm not sure what tools from analysis I would need to show this.

Algebraically, this problem is rather trivial. Testing $x = 0$, $x = f(0)$, $x = 1$, and $x = f(1)$ gives enough information to derive $f(0)f(1) = 0$, and testing $f(0) = 0$ and $f(1) = 0$ will eventually lead to a contradiction. Therefore, I am exclusively interested in a purely analytic existence proof, showing that no function $f:\mathbb{R}\to\mathbb{R}$ satisfies $f'(f(x))f'(x) + f(x) + xf'(x) = 0$.
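For completeness, here is the algebraic chain alluded to above (a sketch; write $a = f(0)$):

$$x=0:\ f(a)=1, \qquad x=a:\ f(1)=1-a, \qquad x=1:\ f(1-a)=a,$$

$$x=1-a:\ f(a)+(1-a)a = 1 \;\Longrightarrow\; a(1-a) = f(0)f(1) = 0.$$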

How to transform a squared-norm objective into a concave form?

Posted: 06 Apr 2021 08:12 PM PDT

The objective function is of the form $\log_2\!\left(\frac{|h \Theta g_1|^2}{|h \Theta g_2|^2}\right)$, where $h$ is a $1\times n$ complex row vector, $\Theta$ is an $n \times n$ complex matrix, and $g_1$, $g_2$ are $n \times 1$ column vectors.

If I want to maximize this in cvx, the problem is that maximizing a convex function is not allowed.

How can I transform it into an equivalent concave form?

Verification: "$A\subseteq\mathbb{R}$, $f: A\to \mathbb{R}$, then $f$ is cont. iff $f^{-1}(O)$ open in $A$, $\forall O \subseteq \mathbb{R}$ open"

Posted: 06 Apr 2021 08:30 PM PDT

Hope everything is going well for everyone. I am hoping to get some feedback here concerning the accuracy, cohesiveness, and superfluousness of my attempt at the following problem. Also, please criticize my proof writing as much as possible; I really value scrutiny.

Problem

For non-empty $A \subseteq \mathbb{R}$, let $f:A \to \mathbb{R}$ be a function. Prove that $f$ is continuous if and only if for every open $O \subseteq \mathbb{R}$, the preimage $f^{-1}(O)$ is open relative to $A$.

Definitions

Open Relative to: If $A \subseteq B \subseteq \mathbb{R}$, then $A$ is called relatively open in $B$ if $\forall x \in A$, $\exists \delta > 0$ such that $\mathcal{B}_{\delta}(x) \cap B \subseteq A$.

Continuous: For non-empty $A \subseteq \mathbb{R}$, let $f:A \to \mathbb{R}$ be a function. $f$ is continuous at $x \in A$ if for every $\epsilon > 0$, there exists $\delta > 0$ such that $A \cap \mathcal{B}_{\delta}(x) \subseteq f^{-1}(\mathcal{B}_{\epsilon}(f(x)))$.

Attempt

$\text{ }$ ($\Longrightarrow$) Let $f : A \subseteq \mathbb{R} \to \mathbb{R}$ be continuous. We want to show that $f^{-1}(O)$ is open relative to $A$. $\text{ }$ Consider an arbitrary open set $O \subseteq \mathbb{R}$. For all $x' \in f^{-1}(O)$, we have that $f(x') \in O$, and since $O$ is open, this means that $\forall f(x') \in O$, $\exists \epsilon > 0$ such that $\mathcal{B}_{\epsilon}(f(x')) \subseteq O$. That $\mathcal{B}_{\epsilon}(f(x')) \subseteq O$ implies $f^{-1}(\mathcal{B}_{\epsilon}(f(x'))) \subseteq f^{-1}(O)$. We know that $f^{-1}(O) \subseteq A$, so we have $f^{-1}(\mathcal{B}_{\epsilon}(f(x'))) \subseteq f^{-1}(O) \subseteq A$. Since $f$ is continuous at $x \in A$ we have $\forall \epsilon > 0$, $\exists \delta > 0$ such that $\mathcal{B}_{\delta}(x) \cap A \subseteq f^{-1}(\mathcal{B}_{\epsilon}(f(x)))$. This means that $\forall x' \in f^{-1}(O)$, $\exists \delta > 0$ such that $$\mathcal{B}_{\delta}(x') \cap A \subseteq f^{-1}(\mathcal{B}_{\epsilon}(f(x'))) \subseteq f^{-1}(O)$$ which provides us with the relationship $\mathcal{B}_{\delta}(x') \cap A \subseteq f^{-1}(O)$. Since $O \subseteq \mathbb{R}$ was arbitrary, we have that $f^{-1}(O)$ is open relative to $A$ for any $O \subseteq \mathbb{R}$.

$\text{ }$ ($\Longleftarrow$) Let $O \subseteq \mathbb{R}$ be an arbitrary open set and let $f^{-1}(O)$ be open in $A \subseteq \mathbb{R}$. We want to show that $f$ is continuous at $x \in A$. $\text{ }$ Assume for the sake of contradiction that $f$ is not continuous at $x \in A$, i.e. $\exists \epsilon > 0$, such that $\forall \delta > 0$ we have $\mathcal{B}_{\delta}(x) \cap A \nsubseteq f^{-1}(\mathcal{B}_{\epsilon}(f(x)))$. $\text{ }$ Given that $f^{-1}(O)$ is open in $A$, we have that $\forall x \in f^{-1}(O)$, $\exists \delta > 0$ such that $\mathcal{B}_{\delta}(x) \cap A \subseteq f^{-1}(O)$. That $O$ is open implies that for each $f(x) \in O$, $\exists \epsilon > 0$ such that $\mathcal{B}_{\epsilon}(f(x)) \subseteq O$. Taking the preimage of both sides, we get that $f^{-1}(\mathcal{B}_{\epsilon}(f(x))) \subseteq f^{-1}(O)$. Since $f^{-1}(\mathcal{B}_{\epsilon}(f(x)))$ is also open, we know that for all $x' \in f^{-1}(\mathcal{B}_{\epsilon}(f(x)))$, $\exists \delta > 0$ such that $\mathcal{B}_{\delta}(x') \subseteq f^{-1}(\mathcal{B}_{\epsilon}(f(x)))$, which means $\mathcal{B}_{\delta}(x') \cap A \subseteq f^{-1}(\mathcal{B}_{\epsilon}(f(x)))$, a contradiction since we assumed that for all values of $\delta > 0$, $\mathcal{B}_{\delta}(x) \cap A \nsubseteq f^{-1}(\mathcal{B}_{\epsilon}(f(x)))$. Thus, $f$ is continuous at $x \in A$.

Questions

Is this true, should I phrase it alternatively, and is it really necessary to mention?: "We know that $f^{-1}(O) \subseteq A$, so we have $f^{-1}(\mathcal{B}_{\epsilon}(f(x'))) \subseteq f^{-1}(O) \subseteq A$."

Are there inaccuracies present in this statement: "...we know that for all $x' \in f^{-1}(\mathcal{B}_{\epsilon}(f(x)))$, $\exists \delta > 0$ such that $\mathcal{B}_{\delta}(x') \subseteq f^{-1}(\mathcal{B}_{\epsilon}(f(x)))$, which means $\mathcal{B}_{\delta}(x') \cap A \subseteq f^{-1}(\mathcal{B}_{\epsilon}(f(x)))$, a contradiction since we assumed that for all values of $\delta > 0$, $\mathcal{B}_{\delta}(x) \cap A \nsubseteq f^{-1}(\mathcal{B}_{\epsilon}(f(x)))$."

Consistency vs. non-contradiction in the algebraic logic tradition

Posted: 06 Apr 2021 08:09 PM PDT

In his introduction to (Skolem 1970), Wang claims that Skolem's conflations of consistency (non-contradiction) and satisfiability are

"explained by the fact that Skolem is in the [algebraic] tradition of Boole, Schroder, Korselt, and Lowenheim."

(p. 22) Wang references an observation of Bernays:

"from [the algebraic point of view] ... satisfiability is the same as consistency since no other hindrance to satisfying conditions can exist besides contradiction."

I do not understand this remark, either from a historical perspective, or from a technical one. Historically, there is plenty of evidence of algebraic logicians using "consistent" interchangeably with "satisfiable" and "non-contradictory". But I can find no evidence from the works of the algebraists referenced above that this is any kind of conflation. Indeed, "non-contradictory" and "satisfiable" seem to be used in the same semantic sense, i.e., to mean (in modern terms) that it is possible to assign truth values to atomic components so that the whole formula comes out true. If a formula contains both an atomic component A and its negation ~A then this assignment is not possible -- the formula is unsatisfiable AND contradictory.

Since the algebraists were not interested in deductive systems based on syntactic rules, I cannot understand why there would be any reason for them to understand "non-contradictory" as equivalent with "satisfiable" if "contradictory" is meant in the sense of syntactically refutable. However, this is apparently how Wang intends the remark above, because he immediately adds that as a result of this conflation, Skolem replaces the assumption of satisfiability with the assumption that a (syntactic) derivation of a contradiction exists in an implicit formal system. I suggest that the derivation is syntactic simply because from this assumption Skolem is supposed to prove that the formula is contradictory in the other semantic sense, roughly, that one of its conjuncts has no assignment of truth values that makes it true.

Does anyone have better insight into this remark?

References:

Wang, Hao. [1970] "A survey of Skolem's work in logic." In Skolem [1970], pp. 17–52.

How are these two approximations of Euclidean distances obtained?

Posted: 06 Apr 2021 08:30 PM PDT


$$r\gg 0$$

$$r_{1}\approx r-\frac{d}{2}\cos(\theta)\tag{1}$$

$$r_{2}\approx r+\frac{d}{2}\cos(\theta)\tag{2}$$

How are the above approximations derived?
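Assuming the (missing) figure shows the usual far-field setup, i.e. two points a distance $d$ apart, with $r_1$, $r_2$ measured from them to an observation point at distance $r$ from their midpoint, at angle $\theta$ from the axis, the law of cosines plus a first-order expansion gives the first formula (a sketch):

$$r_1 = \sqrt{r^2 + \left(\tfrac{d}{2}\right)^2 - r d\cos\theta} = r\sqrt{1 - \tfrac{d}{r}\cos\theta + \tfrac{d^2}{4r^2}} \approx r\left(1 - \tfrac{d}{2r}\cos\theta\right) = r - \tfrac{d}{2}\cos\theta,$$

using $\sqrt{1+u}\approx 1+\tfrac{u}{2}$ for small $u$ and dropping the $d^2/r^2$ term, since $r$ is large compared with $d$; the sign of the cosine term flips for $r_2$.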

A function $f$ satisfying $f''(x) + f'(x) - e^x f(x) = 0$ cannot have a non-negative maximum unless $f\equiv 0$

Posted: 06 Apr 2021 08:10 PM PDT

Let $f: \mathbb{R} \rightarrow \mathbb{R}$ be a $C^2$ function satisfying

\begin{align} f''(x) + f'(x) - e^x f(x) =0 \end{align} for $x \in \mathbb{R}$.

I want to show that $f$ cannot have a non-negative maximum unless $f\equiv 0$.

I have no idea at this moment. My first attempt was to substitute $f(x) = e^{\alpha x} g(x)$ and solve for $\alpha$ and $g(x)$, but this did not work.

Could "$g(a)\neq 0$" be changed to "$\lim_{x\to a} g(x)\neq 0$"?

Posted: 06 Apr 2021 08:21 PM PDT

If $f$ and $g$ are polynomials with $g(a)\neq 0$, then $\lim_{x\to a} \frac{f(x)}{g(x)} = \frac{f(a)}{g(a)}$.

Could "g(a)≠0" change into "lim g(x)≠0"?


Suppose a book with 100 pages is divided into 5 chapters of 20 pages each. The book contains 10 typos distributed randomly amongst the pages.

Posted: 06 Apr 2021 08:15 PM PDT

(a) What is the probability that page 89 has exactly 2 typos?

I know this is a Poisson distribution question, and the rate happens to be $0.1$ typos/page, but I'm having trouble seeing how I would figure out the probability of page 89 having exactly 2 typos.
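For concreteness, here is how the number could come out under the two usual models (a sketch, assuming each typo independently lands on any of the 100 pages with probability $1/100$):

$$\text{exact (binomial): } \binom{10}{2}\left(\tfrac{1}{100}\right)^2\left(\tfrac{99}{100}\right)^{8} \approx 0.00415, \qquad \text{Poisson }(\lambda = 0.1):\ e^{-0.1}\,\frac{(0.1)^2}{2!} \approx 0.00452.$$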

Finding all groups with no proper subgroups

Posted: 06 Apr 2021 08:32 PM PDT

I am trying to solve the following problem in Artin's algebra.

Describe all groups $G$ that contain no proper subgroups.

Here is my attempt.

If $G = \{e\}$, then it is clear $G$ has no proper subgroups, so suppose that $G \neq \{e\}$. Let $x \in G$ be a non-identity element, and consider $\langle x \rangle$, the cyclic group generated by $x$. Certainly $\langle x \rangle \leq G$, but if $G$ has no proper subgroup and $\langle x \rangle \neq \{e\}$ since $x \neq e$, we have $G = \langle x \rangle$. So $G$ must be cyclic.

We prove that $G$ must be finite. Suppose $G$ were infinite. Then $\langle x \rangle \cong \mathbb{Z}$ by the map $\varphi: \mathbb{Z} \to \langle x \rangle$ sending $n \longmapsto x^n$. But $2\mathbb{Z}$ is a nontrivial proper subgroup of $\mathbb{Z}$, and the image of a subgroup under a homomorphism is a subgroup of the codomain. In particular, $2\mathbb{Z}$ is the cyclic subgroup of $\mathbb{Z}$ generated by $2$, so $\varphi(2\mathbb{Z})$ is the cyclic subgroup of $G$ generated by $x^2$. But $x \not \in \langle x^2 \rangle$, so $\langle x^2 \rangle \neq G$. Furthermore, it is not the trivial subgroup, since $x$ has infinite order. So we have found a proper subgroup of $G$, a contradiction; hence $G$ has finite order.

Finally, we prove that $G$ must have prime order. Suppose the order of $G$ is not prime, i.e., $|G| = ab$ for $a,b > 1$. Let $G = \langle x \rangle$, so $|x| = |\langle x \rangle| = ab$. Then $\left(x^a\right)^b = e$, so $\langle x^a \rangle$ is a cyclic subgroup of $G$ of order $b$; since $a,b > 1$, this is a proper subgroup, so we have a contradiction. Hence, $|G| = p$, where $p$ is prime.

Finally, we must show that every cyclic group of prime order contains no proper subgroups. If $G$ is a cyclic group of prime order $p$, then any subgroup $H \leq G$ must have order dividing the order of $G$ by Lagrange's theorem. But the only divisors of $p$ in $\mathbb{N}$ are $1$ and $p$, corresponding to $\{e\}$ and $G$. So there are no proper subgroups of $G$, as desired.

How does this look?

How is the given determinant expanded into the two below?

Posted: 06 Apr 2021 08:14 PM PDT

I am confused about how they expand the main determinant into the sum of the two below it. My first idea is that they use cofactor expansion along the columns, which is how they get the $a_0$ coefficient in front of the second term in the sum; however, neither determinant in the sum would be the resulting determinant when eliminating the given row and column. Furthermore, I do not see how the determinant on the right part of the sum turns out to be $(-1)^{k-1}$ in the line below it. Any pointers would help!

Applying the dbar operator to differentiate a complex function

Posted: 06 Apr 2021 08:11 PM PDT

I'm trying to differentiate this complex function $$ f(z) = (|z|-1)^2 $$ in order to determine where it satisfies the Cauchy-Riemann equation $ \frac{\partial f}{\partial \bar{z}} = 0$.

So I first differentiate with respect to z. (When I calculate it, I differentiate "as if z is real" -- is that correct?) $$ \frac{\partial f}{\partial z} = 2 (\lvert z\rvert -1) \frac{z}{\lvert z \rvert} $$

And then according to this, I can calculate $\frac{\partial f}{\partial \bar{z}}$ by taking the complex conjugate of $\frac{\partial f}{\partial z}$: $$ \frac{\partial f}{\partial \bar{z}} = 2 (\lvert z\rvert -1) \frac{\bar{z}}{\lvert z \rvert} $$

However, the correct answer is supposed to be $$ \frac{\partial f}{\partial \bar{z}} = 2 (\lvert z\rvert -1) \frac{z}{ \bar{z}} $$

Where am I going wrong? Any advice would be much appreciated.
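For comparison, here is what a direct Wirtinger computation gives (a sketch; the Wirtinger rule treats $z$ and $\bar z$ as independent variables rather than differentiating "as if $z$ is real"): writing $|z| = (z\bar z)^{1/2}$,

$$\frac{\partial |z|}{\partial \bar z} = \frac{z}{2|z|} \quad\Longrightarrow\quad \frac{\partial f}{\partial \bar z} = 2\left(|z|-1\right)\frac{\partial |z|}{\partial \bar z} = (|z|-1)\,\frac{z}{|z|}.$$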

Subtraction - visualization

Posted: 06 Apr 2021 08:25 PM PDT

Subtraction is perhaps best viewed as removing objects from a set. For instance, if I have \$10, and I give away \$5, I have \$5. I can visualize removing \$5 from the original \$10. But, for instance, in the following expression: $$ \triangle population_{t}=births_{t}-deaths_{t} $$

the change in population from $t-1$ to $t$, is essentially the difference between new births and deaths. How is subtraction best viewed in this case? We aren't "removing" anyone from the set of births, right?

Spectral decomposition of Rodrigues' rotation formula

Posted: 06 Apr 2021 08:31 PM PDT

I am supposed to rewrite Rodrigues' rotation formula

$R(v)=v\cos \phi+k(k\cdot v)(1-\cos\phi)+(k\times v)\sin\phi$

in the form of spectral decomposition. I can figure out that the eigenvalues are 1, $e^{i\phi}$, $e^{-i\phi}$ and the eigenvector belonging to 1 is k, so we have

$R=kk^T+e^{i\phi}v_+v_+^T+e^{-i\phi}v_-v_-^T,$

but I'm not sure how to find the other eigenvectors $v_\pm$ without explicitly calculating $\mathrm{Ker}(R-\lambda I)$ from the matrix form. Is there a more elegant, painless way how to solve this? Or maybe I should choose a different approach altogether? Thanks for any help.
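In case it helps, one way to write down the remaining eigenvectors without computing any kernels (a sketch): pick an orthonormal pair $e_1, e_2$ perpendicular to $k$ with $e_1\times e_2 = k$; then

$$v_\pm = \frac{1}{\sqrt{2}}\left(e_1 \mp i\,e_2\right), \qquad R\,v_\pm = e^{\pm i\phi}\,v_\pm,$$

which follows directly from $Re_1 = \cos\phi\,e_1 + \sin\phi\,e_2$ and $Re_2 = -\sin\phi\,e_1 + \cos\phi\,e_2$. (Note that for complex eigenvectors the projectors should use the conjugate transpose, $v_\pm v_\pm^\dagger$.)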

Prove that if $G$ is not 2-connected then there exists a pair of edges such that no cycle contains both edges

Posted: 06 Apr 2021 08:20 PM PDT

Full question: Prove that if $G$ is loopless, has no isolated vertices, has at least $2$ edges and is not $2$-connected then there exists a pair of edges $\{e_1, e_2\}$ in $G$ such that no cycle contains both $e_1$ and $e_2$.

So far I have that $G$ is not $2$-connected, and therefore must contain a proper separation of order $0$ or order $1$. If $G$ contains only proper separations of order $1$, then $G$ is connected; if $G$ contains a proper separation of order $0$, then $G$ is disconnected. I have carried on from there, but the proof ends up quite verbose, so I was wondering if there is a simpler way to do it.

$E[X|\mathcal{Q}] \leq \liminf_nE [X_n|\mathcal{Q}]$

Posted: 06 Apr 2021 08:31 PM PDT

If $(X_n)_n$ is a sequence of nonnegative random variables and $\mathcal{Q}$ is a sub-$\sigma$-algebra, then the conditional Fatou lemma holds almost surely: $$E[\liminf_n X_n|\mathcal{Q}] \leq \liminf_n E[X_n|\mathcal{Q}].$$

Now suppose that $(X_n)_n$ converges in probability to $X$. Is it true that, almost surely, $$E[X|\mathcal{Q}] \leq \liminf_n E[X_n|\mathcal{Q}]\,?$$

Need help understanding conditional probability of continuous random variable conceptually

Posted: 06 Apr 2021 08:16 PM PDT

I need help understanding some stuff about conditional density. Here is an example from my book.

A bird lands in a grassy region described as follows: $x ≥ 0$, and $y ≥ 0$, and $x + y ≤ 10$. Let $X$ and $Y$ be the coordinates of the bird's landing. Assume that $X$ and $Y$ have the joint density: $f(x, y) = 1/50$ for $0 ≤ x$ and $0 ≤ y$ and $x + y ≤ 10$,

Given that the bird's landing $y$-coordinate is $2$, what is the probability that the $x$-coordinate is between $0$ and $5$?

$f(y) = \int_0^{10-y}\cfrac{1}{50}dx = \cfrac{10-y}{50}$ for $0\leq y \leq 10$

so $f(x|y) = \cfrac{f(x,y)}{f(y)} = \cfrac{1}{10-y}$ for $0 ≤ x$ and $0 ≤ y$ and $x + y ≤ 10$

The probability is $P(0 \leq X \leq 5| Y=2) = \int_0^5f(x|2)\,dx = \int_0^5\cfrac{1}{8}\,dx = 5/8$

I know the answer above is correct, and I know how to calculate conditional probability. But my question is: how come the given part $Y=2$ does NOT have probability $0$, since that's the probability that a continuous random variable equals an EXACT value, i.e. $\int_2^2f(y)dy = 0$, making the answer undefined? Why is this not the case? What am I misunderstanding conceptually about the $Y=2$ in $P(0 \leq X \leq 5| Y=2)$, and about conditional density in general?

Also, the conditional pdf will always have the same bounds as the joint pdf, correct?
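One common way to reconcile this (a sketch, using continuity of the densities): $P(\,\cdot\mid Y=2)$ is interpreted as the limit of conditioning on the thin strip $\{2\le Y\le 2+\varepsilon\}$, which has positive probability, and the $\varepsilon$'s cancel:

$$P(0\le X\le 5\mid Y=2) = \lim_{\varepsilon\to 0^+}\frac{P(0\le X\le 5,\ 2\le Y\le 2+\varepsilon)}{P(2\le Y\le 2+\varepsilon)} = \lim_{\varepsilon\to 0^+}\frac{\varepsilon\int_0^5 f(x,2)\,dx}{\varepsilon\,f_Y(2)} = \int_0^5 f(x\mid 2)\,dx.$$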

Trying to find the Minimal Polynomial given the Characteristic Polynomial of a Linear Map

Posted: 06 Apr 2021 08:32 PM PDT

Let $T:V \to V$ be a linear transformation (where $V$ is a vector space over a field $K$).

Suppose the Characteristic Polynomial of $T$ is $C_T(x) = {(x-\lambda_1)}^{r_1}{(x-\lambda_2)}^{r_2}\dots{(x-\lambda_k)}^{r_k}$ (where $\lambda_i$ are all distinct eigenvalues and $r_i$ are the algebraic multiplicities).

Then, will the Minimal Polynomial of $T$ be $M_T(x)=(x-\lambda_1)(x-\lambda_2)\dots(x-\lambda_k)$?

If it is not always true, can you provide a counterexample?
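For what it's worth, a standard counterexample (a sketch): a single Jordan block already fails,

$$T = \begin{pmatrix}\lambda & 1\\ 0 & \lambda\end{pmatrix}, \qquad C_T(x) = (x-\lambda)^2, \qquad M_T(x) = (x-\lambda)^2 \neq (x-\lambda),$$

since $T-\lambda I \neq 0$. In general, $M_T(x) = (x-\lambda_1)\cdots(x-\lambda_k)$ holds exactly when $T$ is diagonalizable.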

Mittag–Leffler partial fraction decomposition for $\frac{\pi}{\sin(\pi z)}$

Posted: 06 Apr 2021 08:23 PM PDT

Question from the book Eberhard Freitag, Rolf Busam - Complex Analysis

I marked my question in the image. My problem is that I don't see how the two marked formulas for the function $\frac{\pi}{\sin(\pi z)}$ are different.

Wieferich's criterion for Fermat's Last Theorem

Posted: 06 Apr 2021 08:29 PM PDT

I have found the following way to prove a criterion (Wieferich's) for Fermat's Last Theorem and am wondering what might be wrong. My point of doubt is the calculation of the Fermat quotients of $y,z$ being $-1$, since I found these rules on Wikipedia.

Also, should I split this into parts? I can imagine people don't feel like going through too much text.

Anyway, have fun!

Theorem:

Let:

$ \quad \quad \quad \quad p$ be an odd prime,

$ \quad \quad \quad \quad \gcd(x,y,z) = 1$,

$ \quad \quad \quad \quad xyz \not \equiv 0 \pmod p$

If:

$ \quad \quad \quad \quad x^p = y^p + z^p$,

then $p$ is a Wieferich prime.

Proof:

Consider the following congruence:

$ \quad \quad \quad \quad (x^n - y^n)/(x - y) \equiv nx^{n - 1} \pmod {x - y}$

which we can prove by induction on $n$, performing the division first.

Let $n = p$.

Since:

$ \quad \quad \quad \quad \gcd(x - y,(x^p - y^p)/(x - y))$

$ \quad \quad \quad \quad = \gcd(x - y,px^{p - 1})$

$ \quad \quad \quad \quad = \gcd(x - y,p)$

$ \quad \quad \quad \quad = \gcd(x - z,p)$

$ \quad \quad \quad \quad = 1$,

it follows that:

$ \quad \quad \quad \quad x - y = r^p$,

$ \quad \quad \quad \quad (x^p - y^p)/(x - y) = s^p$,

$ \quad \quad \quad \quad x - z = t^p$,

$ \quad \quad \quad \quad (x^p - z^p)/(x - z) = u^p$,

$ \quad \quad \quad \quad rs = z$,

$ \quad \quad \quad \quad tu = y$,

for some $r,s,t,u$ with $\gcd(r,s) = \gcd(t,u) = 1$.

The following also holds for $x - z,t,u$:

$ \quad \quad \quad \quad s \equiv 1 \pmod p \implies s^p \equiv 1 \pmod {p^2}$

Now, since:

$ \quad \quad \quad \quad s^p \equiv px^{p - 1} \pmod {x - y}$

$ \quad \quad \quad \quad \implies s^p = px^{p - 1} + ar^p \equiv 1 \pmod {p^2}$, for some $a$

$ \quad \quad \quad \quad \implies s \equiv ar \equiv 1 \pmod p \implies s^p \equiv (ar)^p \equiv 1 \pmod {p^2}$

$ \quad \quad \quad \quad \implies ar^p \equiv 1/a^{p - 1} \pmod {p^2}$

$ \quad \quad \quad \quad \implies s^p = px^{p - 1} + ar^p \equiv px^{p - 1} + 1/a^{p - 1} \equiv 1 \pmod {p^2}$

$ \quad \quad \quad \quad \implies px^{p - 1} \equiv 1 - 1/a^{p - 1} \pmod {p^2}$

$ \quad \quad \quad \quad \implies p(ax)^{p - 1} \equiv a^{p - 1} - 1 \pmod {p^2}$

$ \quad \quad \quad \quad \implies q_p(a) \equiv 1 \pmod p$,

where $q_p(a)$ denotes the Fermat-quotient for $a$ modulo $p$.

So it follows that:

$ \quad \quad \quad \quad q_p(r) \equiv q_p(1/a) \equiv -q_p(a) \equiv -1 \pmod p$

Because of $q_p(s) \equiv 0 \pmod p$:

$ \quad \quad \quad \quad q_p(z) \equiv q_p(rs) \equiv q_p(r) + q_p(s) \equiv -1 + 0 \equiv -1 \pmod p$

Since the same holds for $x - z,t,u$, we now have:

$ \quad \quad \quad \quad q_p(y) \equiv q_p(z) \equiv -1 \pmod p$

From which it follows that:

$ \quad \quad \quad \quad y^{p - 1} \equiv 1 - p \pmod {p^2}$

$ \quad \quad \quad \quad z^{p - 1} \equiv 1 - p \pmod {p^2}$

$ \quad \quad \quad \quad \implies y^p \equiv y(1 - p) \pmod { p^2 }$

$ \quad \quad \quad \quad \implies z^p \equiv z(1 - p) \pmod { p^2 }$

We also note:

$ \quad \quad \quad \quad y^p \equiv (tu)^p \equiv t^p \equiv x - z \pmod {p^2}$

$ \quad \quad \quad \quad z^p \equiv (rs)^p \equiv r^p \equiv x - y \pmod {p^2}$

So we can set:

$ \quad \quad \quad \quad y(1 - p) \equiv x - z \pmod {p^2}$

$ \quad \quad \quad \quad z(1 - p) \equiv x - y \pmod {p^2}$

$ \quad \quad \quad \quad \implies (x - z)/y \equiv (x - y)/z \implies z(x - z) \equiv y(x - y) \pmod {p^2}$

$ \quad \quad \quad \quad \implies y^2 - z^2 \equiv (y + z)(y - z) \equiv x(y - z) \pmod {p^2}$

So either:

$ \quad \quad \quad \quad \implies x \equiv y + z \pmod {p^2}$

or:

$ \quad \quad \quad \quad \implies p | y - z$

Suppose $x \equiv y + z \pmod {p^2}$:

$ \quad \quad \quad \quad \implies (x - y)^p \equiv r^{p^2} \equiv r^p \equiv x - y \implies z^p \equiv z \pmod {p^2}$, contradicting:

$ \quad \quad \quad \quad z^p \equiv z(1 - p) \pmod { p^2 }$

So now we know $y \equiv z \pmod {p}$

But then:

$ \quad \quad \quad \quad y^p \equiv z^p \pmod {p^2}$

$ \quad \quad \quad \quad \implies x^p \equiv y^p + z^p \equiv 2z^p \pmod {p^2}$

Also:

$ \quad \quad \quad \quad x \equiv y + z \implies x \equiv z + z \equiv 2z \pmod p$

$ \quad \quad \quad \quad \implies x^p \equiv (2z)^p \pmod {p^2}$

We conclude:

$ \quad \quad \quad \quad x^p \equiv (2z)^p \equiv 2z^p \pmod {p^2}$

$ \quad \quad \quad \quad \implies (2z)^p - 2z^p \equiv 0 \pmod {p^2}$

$ \quad \quad \quad \quad \implies z^p(2^p - 2) \equiv 0 \pmod {p^2}$,

Since $p \nmid z$, it follows that $p^2 \mid 2^p - 2 = 2(2^{p-1} - 1)$, and since $p$ is odd, $2^{p-1} \equiv 1 \pmod{p^2}$; that is, $p$ must be a Wieferich prime.
