Saturday, December 11, 2021

Recent Questions - Mathematics Stack Exchange



Draw a Graph from a Given Degree Sequence

Posted: 11 Dec 2021 03:52 AM PST

I want to prove that the degree sequence $(5,5,5,2,2,2,1)$ cannot be realized by a graph (the graph does not need to be simple). I am looking for a theorem, or a way to contradict the assumption that such a graph exists.

My solution was the following, for the given nodes:degrees => $(A:5; B:5; C:3; D:2; E:2; F:2; G:1)$

[graph image]

Note that the vertex $C$ is the one that produces the contradiction: it still needs two more edges, but we cannot attach them to the previous nodes.

So my question is: Is there any theorem which I can use to prove this contradiction? Because I feel like my solution isn't enough.
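A caveat and a checkable criterion: if multigraphs with loops are allowed, an even degree sum (here $5+5+5+2+2+2+1=22$) already suffices, so the obstruction only appears in the simple-graph setting. There the Erdős–Gallai theorem applies, and this sequence fails its inequality at $k=3$. A small sketch:

```python
def erdos_gallai(seq):
    """Check whether a degree sequence is realizable by a SIMPLE graph
    (Erdos-Gallai theorem: even sum plus one inequality per k)."""
    d = sorted(seq, reverse=True)
    n = len(d)
    if sum(d) % 2 != 0:
        return False
    for k in range(1, n + 1):
        lhs = sum(d[:k])
        rhs = k * (k - 1) + sum(min(x, k) for x in d[k:])
        if lhs > rhs:
            return False
    return True

print(erdos_gallai([5, 5, 5, 2, 2, 2, 1]))  # False: the inequality fails at k = 3
print(erdos_gallai([2, 2, 2]))              # True: realized by a triangle
```

For the non-simple case you would instead check the multigraph conditions (even sum; and, if loops are forbidden, the maximum degree at most the sum of the others).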

How to prove that the graphs of continuous mappings are homeomorphic to each other (set of subsets of topological space / homeomorphic)

Posted: 11 Dec 2021 03:52 AM PST

I need to prove that the graphs of all continuous mappings of the same space are homeomorphic to each other:

Picture with condition

What does the symbol ⩫ stand for in the context of "similar to a distribution"?

Posted: 11 Dec 2021 03:46 AM PST

I came across this symbol (⩫) in a set of notes about constructing a confidence interval for the mean difference $\mu_X - \mu_Y$ for two Normal distributions $X$ and $Y$ with unknown population means, unknown population variances and no assumption about the equality of said variances.


Here $t_v$ denotes the $t$-distribution with $v$ degrees of freedom.

This "result" was given to me without any explanation and my lecturer dismissed the meaning behind the symbol, stating that I can simply treat it as having the same meaning as $\sim$. Surely the meaning of the two notations must be slightly different?

If $X$ is connected, show that no pair of points with disjoint open neighborhoods can form a retract of $X$.

Posted: 11 Dec 2021 03:35 AM PST

Let $X$ be a connected space and $x_{0} , x_{1} \in X$ be two points of $X$ which have disjoint open neighbourhoods in $X$.

Show that $A=\{x_{0}, x_{1}\}$ can never be a retract of $X$.

We know that the preimage of any open set under a continuous mapping is open, but I need help building the proof from there.

(Question source: Algebraic Topology A Primer -Satya Deo - Chapter 2 (Section 3) - Exercise 2)

Norm equivalence in a Banach space

Posted: 11 Dec 2021 03:49 AM PST

Let $E$ be a Banach space with two norms on it, $\|\cdot\|_1$ and $\|\cdot\|_2$, such that $\forall x\in E: \|x\|_1 \le \|x\|_2$. Prove that the norms are equivalent.

I was working on some functional analysis exercises and couldn't solve this one. The exercise gave a hint to use the Banach–Schauder open mapping theorem:

Theorem:(Open Mapping Theorem) Suppose $X$ and $Y$ are Banach spaces. If $T:X \rightarrow Y$ is a bounded surjective operator, then $T$ is an open map.

I don't see how I can use this hint for this exercise. Any help would be appreciated.
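A sketch along the hinted lines (assuming, as the exercise presumably intends, that $E$ is complete with respect to both norms): apply the open mapping theorem to the identity map
$$\mathrm{id}:(E,\|\cdot\|_2)\to(E,\|\cdot\|_1),$$
which is linear, bijective, and bounded since $\|x\|_1\le\|x\|_2$. By the open mapping theorem it is open, so its inverse is bounded: there exists $C>0$ with
$$\|x\|_2\le C\,\|x\|_1\quad\text{for all }x\in E.$$
Together with the hypothesis this gives $\|x\|_1\le\|x\|_2\le C\|x\|_1$, i.e. the norms are equivalent.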

Lower bound for the "normalized scaling coefficient" of a matrix

Posted: 11 Dec 2021 03:30 AM PST

Let $A:X\rightarrow Y$ be a linear operator between real, finite-dimensional vector spaces. Are there any known non-zero lower bounds for the expression $$ \inf_{x\neq 0}\frac{\|Ax\|_2^2}{\|x\|_2^2} $$ in case $A$ has some "structure"? If we don't assume any structure on $A$, one could obviously pick $A$ to be the zero matrix, in which case the infimum is $0$; so being non-zero is itself some "structure" that $A$ must have. Perhaps there are also dual results for the $\sup$-version of this expression, which is the operator norm?

Determining vector that has known dot product with another vector, with a max on the taxicab distance

Posted: 11 Dec 2021 03:30 AM PST

Is there a way to determine a vector $\vec{x}$ such that its dot product with a known vector is a certain value, under the constraint that its taxicab distance is a certain value?

Find $\vec{x}$ such that $\vec{x}\cdot\vec{y} = a$
where $\vec{y}$ and $a$ are known, and $taxicab(\vec{x}) = b$


Another way to look at it:

Let $c_i \in \mathbb{N}$ for $i \in \{1, 2, \ldots, n\}$

Let $total \in \mathbb{N}$

Let $b \in \mathbb{N}$

Find $x_i \in \mathbb{N}$ for $i \in \{1, 2, \ldots, n\}$

Such that:

$$ \sum_{i=1}^{n}{x_i} = b \\ \sum_{i=1}^{n}{x_i \cdot c_i} = total $$


Let's say you are at a restaurant that sells different pretzels in boxes. All boxes are the same size, but depending on the kind of pretzel, each contains a different number of pretzels.

You want exactly $total$ pretzels, divided over $b$ boxes.

How many boxes of which kind of pretzel should you take?


Example:

$$ c_1 = 604 \\ c_2 = 4500 \\ c_3 = 8111 \\ c_4 = 58516 \\ c_5 = 213 \\ c_6 = 4343 \\ c_7 = 8398 \\ c_8 = 7029 \\ b= 20 \\ total = 268974 $$

Answer: $$ \begin{align} \sum_{i=1}^{n}{x_i} =&\space b \\ 2 + 3 + 1 + 3 + 1 + 2 + 4 + 4 =&\space 20 \\ \sum_{i=1}^{n}{x_i \cdot c_i} =&\space total \\ 2 \cdot 604 + 3 \cdot 4500 + 1 \cdot 8111 + 3 \cdot 58516 \quad & \\ + 1 \cdot 213 + 2 \cdot 4343 + 4 \cdot 8398 + 4 \cdot 7029 =&\space 268974 \end{align} \\ $$ Therefore $$ x_1 = 2, x_2 = 3, x_3 = 1, x_4 = 3, x_5 = 1, x_6 = 2, x_7 = 4, x_8 = 4 $$

is a solution.


Note: I found this solution using brute force. I'm looking for an analytical way.
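I don't know of a closed-form analytical method — this is an integer, change-making-style feasibility problem — but the brute force can at least be organized as a memoized search rather than blind enumeration. A sketch (the function name `solve` is mine):

```python
from functools import lru_cache

def solve(counts, boxes, total):
    """Search for non-negative integers x_i with sum(x_i) == boxes and
    sum(x_i * counts[i]) == total; returns a tuple of x_i or None."""
    n = len(counts)

    @lru_cache(maxsize=None)
    def go(i, b, t):
        # go(i, b, t): fill b boxes totaling t using kinds i..n-1
        if i == n:
            return () if b == 0 and t == 0 else None
        for x in range(b + 1):
            if x * counts[i] > t:
                break  # counts are positive, so larger x cannot work either
            rest = go(i + 1, b - x, t - x * counts[i])
            if rest is not None:
                return (x,) + rest
        return None

    return go(0, boxes, total)

# Verify the brute-force answer from the example above:
c = [604, 4500, 8111, 58516, 213, 4343, 8398, 7029]
x = [2, 3, 1, 3, 1, 2, 4, 4]
print(sum(x), sum(xi * ci for xi, ci in zip(x, c)))  # 20 268974

# A tiny instance the memoized search handles instantly:
print(solve((1, 2, 3), 2, 4))
```

The memoization keys are (kind, boxes left, total left), so repeated subproblems are solved only once; for large totals an integer linear programming solver would be the more scalable route.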

$f$ is a square-integrable function, $ \|f(x+h)-f(x)\|_{L^2}=O(h^{1+\alpha}), h\rightarrow 0. $

Posted: 11 Dec 2021 03:28 AM PST

Suppose $f$ is a square-integrable function. There exists $\alpha>0$ satisfying $$ \|f(x+h)-f(x)\|_{L^2}=O(h^{1+\alpha}), h\rightarrow 0. $$ Prove that $f(x)$ equals a constant almost everywhere.

Let $F(x)=\int_0^x f(t)\,\mathrm{d}t$. I can prove that $$ F(x+h)-F(x)=o(h^{1/2}), \quad h\rightarrow 0, $$ but it seems to have no relation to the problem. I have no idea how to start.

Any help is appreciated. Thanks!
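One standard route (assuming the problem is posed on $\mathbb{R}$, or on the circle with the obvious modifications, so that Plancherel's theorem applies): by Plancherel,
$$\|f(\cdot+h)-f\|_{L^2}^2 = c\int |e^{ih\xi}-1|^2\,|\hat f(\xi)|^2\,d\xi,$$
and since $|e^{ih\xi}-1| = 2|\sin(h\xi/2)| \ge \tfrac{2}{\pi}|h\xi|$ for $|h\xi|\le\pi$, the hypothesis gives
$$h^2\int_{|\xi|\le \pi/|h|}|\xi|^2\,|\hat f(\xi)|^2\,d\xi \le C\,h^{2+2\alpha}.$$
Dividing by $h^2$ and letting $h\to 0$ forces $\int |\xi|^2|\hat f(\xi)|^2\,d\xi = 0$, so $\hat f$ is supported in $\{0\}$, which for a square-integrable function means $f$ is constant almost everywhere (on the circle an arbitrary constant; on $\mathbb{R}$ the constant must be $0$).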

Existence and Uniqueness of Nonlinear ODE

Posted: 11 Dec 2021 03:25 AM PST

What are the conditions on the non-linear functions $f$ and $g$, for the system $$\ddot{x}+(1+g(\dot{x},t))\dot{x}+f(x) = 0,$$ to have global solutions for all $t\geq 0$?

Prove that sequence converges

Posted: 11 Dec 2021 03:50 AM PST

I have the following sequence: $u_n = \frac{1}{9} \cdot (10 - 0.1^n)$ and I want to prove that it converges to $\frac{10}{9}$.

I know that I must show: for an arbitrary $\epsilon > 0$ there is an $N > 0$ such that if $n > N$ then $|u_n - \frac{10}{9}| < \epsilon$.

And I have tried this multiple times, however I always get a result where $N$ ends up being $< 0$ for some $\epsilon > 0$.

Can someone show how to correctly prove it?

Edit:

I get to $0.1^n < 9 \epsilon$, but after taking the log I get $n > -\log_{10}(9 \epsilon)$, which is negative for certain values of $\epsilon$.
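A negative bound is not actually a problem: the definition only requires that some $N$ work, and any larger $N$ works too. Concretely,
$$\left|u_n - \tfrac{10}{9}\right| = \tfrac19\,(0.1)^n < \epsilon \iff n > -\log_{10}(9\epsilon),$$
so one may simply take
$$N = \max\{1,\ \lceil -\log_{10}(9\epsilon)\rceil\}.$$
When $\epsilon$ is large enough that $-\log_{10}(9\epsilon) < 0$, the inequality holds for every $n\ge 1$ and $N=1$ does the job.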

Applying the formalism of common knowledge to a simple example: Is "$Y=3$" common knowledge among the boy, the robot and the girl?

Posted: 11 Dec 2021 03:39 AM PST

I am trying to understand the formalism of common knowledge. The sense behind the concept is already well explained here, however I struggle with linking the formalism to a practical use case.

When reading the definitions it first appears straightforward: We have a set of states $S$, an event $E\subseteq S$ as a subset of these states and a partition $P_i=\{\{\ldots\},\{\ldots\},\ldots,\{\ldots\}\}$ of $S$ representing the knowledge of a decision maker $i$ in a state. Here I am not quite clear how I should practically understand the partition's elements (i.e. these subsets of $S$). The puzzle pieces seem to fall into place when reading on: In state $s\in S$, decision maker $i$ knows that one of the states in $P_i(s)$ occurs, but he doesn't know which one. Here $P_i(s)$ is the unique element in $P_i$ that contains $s$.

The knowledge function $K_i(e)=\{s\in S|P_i(s)\subset e\}$ is the set of states (a subset of the event $e$), where the decision maker knows that event $e$ occurs. In other literature, instead of the proper subset, the improper subset is also used: $K_i(e)=\{s\in S|P_i(s)\subseteq e\}$. The operator for the idea "everyone knows $e$" is defined by intersecting the knowledge of all decision makers $i$ as follows: $E(e)=\bigcap_{i}K_i(e)$. Iterating the function $E$ is understood as the well known function composition $E^1(e)=E(e)$ and $E^{n+1}=E(E^n(e))$. Finally the Common Knowledge function is given by:

$$C(e)=\bigcap_{n=1}^{\infty}E^n(e)=E^1(e)\cap E^2(e)\cap E^3(e)\ldots$$

So far so good, but when trying to apply this formalism to a practical use case, I'm missing the right impetus. I found a very simple and illustrative example in this YT video "Is It Common Knowledge?" (by James Miller):


I would really appreciate it if someone could show me how to use the (admittedly simple) formalism to describe this use case.

My rough ideas are:

  • I have to define the three "decision makers", the boy ($i=1$), the robot ($i=2$) and the girl ($i=3$). This is our starting point.
  • I have to define $S$ and the partitions $P_1,P_2,P_3$. Here I already have difficulties.
  • For our boy, robot and girl we have to define the knowledge functions $K_1,K_2,K_3$.
  • To describe that "$Y=3$" is not common knowledge, we have to intersect $K_1(e)\cap K_2(e)\cap K_3(e)$, use the iteration $K_1(K_2(K_3(e)))$ and arrive at a result that is the empty set $\emptyset$.

I would be grateful if you could help me put the puzzle pieces together.

Time and Distance Problem : Circular Track

Posted: 11 Dec 2021 03:32 AM PST

$A$ and $B$ start running simultaneously on a circular track from point $O$ in the same direction. If the ratio of their speeds is $6 :1$ respectively, then how many times is $A$ ahead of $B$ by a quarter of the length of the track before they meet at $O$ for the first time?

Now I have found that $A$ and $B$ will meet at the starting point $O$ after $\operatorname{lcm}(\frac{L}{6x},\frac{L}{x})=\frac{L}{x}$ hours, where $L$ is the length of the track.
The first time $A$ is ahead of $B$ by a quarter of the length of the track is at $\frac{L}{20x}$ hours, but how do I find how many more times $A$ is ahead of $B$ by that margin within the total $\frac{L}{x}$ hours? Please help!

Thanks in advance!
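A quick numerical sanity check of the counting (a sketch under the normalization $L=1$, $x=1$): $A$'s lead $5xt$ reduced modulo $L$ passes the value $L/4$ once per relative lap, and the relative distance gained before they meet again at $O$ is $5L$.

```python
L = 1.0            # track length (normalized)
x = 1.0            # B's speed; A runs at 6x, so the gap grows at rate 5x
T = L / x          # time until A and B first meet again at O (one lap for B)

# Count the moments in (0, T] when A leads B by exactly a quarter lap:
count, prev = 0, 0.0
steps = 100_000
for i in range(1, steps + 1):
    t = T * i / steps
    lead = (5 * x * t) % L                            # A's lead, mod one lap
    if prev < L / 4 <= lead and lead - prev < L / 2:  # upward crossing, not a wrap
        count += 1
    prev = lead
print(count)  # the lead passes a quarter lap 5 times before they meet at O
```

This matches the direct argument: the lead equals $L/4 \pmod L$ at lead values $\tfrac14 L, \tfrac54 L, \tfrac94 L, \tfrac{13}4 L, \tfrac{17}4 L$, all below $5L$, so five times.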

Is $\det(ABA) = \det(B)$?

Posted: 11 Dec 2021 03:49 AM PST

Let $A$ and $B$ be square matrices, with $A$ invertible.

If

$\det(ABA)=0,$

then it can't also be that $\det(B)=0$.

Is this right?

Because only

$\det(ABA^{-1}) = \det(B)$

holds, and in general $A \neq A^{-1}$.

So then, is $\det(ABA) = \det(B)$ a false statement?
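Since determinants are multiplicative, $\det(ABA)=\det(A)\det(B)\det(A)=\det(A)^2\det(B)$; so with $A$ invertible, $\det(ABA)=0$ does force $\det(B)=0$, and $\det(ABA)=\det(B)$ holds only when $\det(A)^2=1$. A quick numpy check with arbitrary sample matrices:

```python
import numpy as np

A = np.array([[2.0, 0.0], [0.0, 1.0]])   # invertible: det(A) = 2
B = np.array([[3.0, 1.0], [0.0, 4.0]])   # det(B) = 12

lhs = np.linalg.det(A @ B @ A)
rhs = np.linalg.det(A) ** 2 * np.linalg.det(B)
print(round(lhs, 6), round(rhs, 6))  # 48.0 48.0 -- det(ABA) = det(A)^2 * det(B)
```

Here $\det(ABA)=48\neq 12=\det(B)$, so $\det(ABA)=\det(B)$ is indeed false in general.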

$L_p$ and measure convergence

Posted: 11 Dec 2021 03:45 AM PST

Consider the Lebesgue measure. I'm asked to study the $L_p$ and measure convergence of the following functions:

  1. $f_n(x)=\mathcal{X}_{[n+3,n+4]}(x)$ in $(0,\infty)$ and $n\in\mathbb{N}$

  2. $g_n(x)=n^3\mathcal{X}_{[0,\frac{1}{\log(n+2)}]}(x)$ in $[0,1]$ and $n\in\mathbb{N}$


  1. Since $f_n(x)\longrightarrow 0$ a.e. (as $n\longrightarrow\infty$), we have to check: $$\int_{(0,\infty)} |f_n-0|^p dm=m([n+3,n+4])=1 $$ Therefore, $f_n$ doesn't converge to $0$ in $L_p$.

  2. Since $g_n(x)\longrightarrow 0$ a.e. (as $n\longrightarrow\infty$), now we have to compute: $$\int_{[0,1]} |g_n-0|^p dm=n^{3p}\,m\!\left(\left[0,\tfrac{1}{\log(n+2)}\right]\right)=\frac{n^{3p}}{\log(n+2)}\longrightarrow\infty\;\text{as}\;n\longrightarrow\infty $$ since $\log(n+2)\ll n^{3p}$.

Therefore, $g_n$ doesn't converge to $0$ in $L_p$.


But I don't know how to study the convergence in measure. Any hint? Thanks!
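A hint for the measure part, using the definition directly: for $0<\varepsilon\le 1$,
$$m(\{x : |f_n(x)| \ge \varepsilon\}) = m([n+3, n+4]) = 1 \not\longrightarrow 0,$$
so $f_n \not\to 0$ in measure on $(0,\infty)$; while
$$m(\{x : |g_n(x)| \ge \varepsilon\}) \le m\!\left(\left[0, \tfrac{1}{\log(n+2)}\right]\right) = \frac{1}{\log(n+2)} \longrightarrow 0,$$
so $g_n \to 0$ in measure on $[0,1]$ even though it fails to converge in $L_p$.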

Using value outside galois field and do calculations inside

Posted: 11 Dec 2021 03:48 AM PST

Currently I'm working on the Shamir Secret Sharing algorithm, and for numbers that are not so small (long passwords of more than 7 characters) my calculations break because of overflows.

The case is this: I have a big number, say 49584309583, and if I use it in calculations directly, precision is eventually lost. I've heard that the calculations can instead be done in a Galois field, for example in GF(251).

So the question is: can I convert my number back from the Galois field to the outside afterwards?
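A minimal sketch of Shamir's scheme over a prime field (the prime $2^{127}-1$ is my choice, large enough that a number like 49584309583 fits without splitting; with GF(251) you would first write the secret in base 251, one share set per digit). All arithmetic stays inside the field, so there are no overflows, and any secret below the prime converts back exactly:

```python
import random

P = 2**127 - 1   # a prime; every operation is reduced mod P, so nothing overflows

def split(secret, n, k):
    """Split `secret` (must be < P) into n shares, any k of which recover it."""
    coeffs = [secret] + [random.randrange(P) for _ in range(k - 1)]
    shares = []
    for x in range(1, n + 1):
        y = 0
        for c in reversed(coeffs):      # Horner evaluation of the polynomial mod P
            y = (y * x + c) % P
        shares.append((x, y))
    return shares

def reconstruct(shares):
    """Lagrange interpolation at 0, entirely inside GF(P)."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        secret = (secret + yi * num * pow(den, -1, P)) % P
    return secret

secret = 49584309583
shares = split(secret, n=5, k=3)
print(reconstruct(shares[:3]) == secret)  # True: the number converts back exactly
```

The "conversion back" is therefore trivial as long as the secret is smaller than the field's prime: the reconstructed residue is the original integer. (`pow(den, -1, P)` needs Python 3.8+.)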

Measurability of the integral of Brownian motion

Posted: 11 Dec 2021 03:33 AM PST

Hi, once again a measurability question about Brownian motion. I'm not that familiar with how to prove rigorously that a function is measurable, so I'm stuck at the following exercise: Show that for $T>0$ the mapping $$\omega \rightarrow X_T(\omega) = \int_{0}^{T} B_t(\omega)\, dt$$ is measurable. The Brownian motion here is assumed to be jointly measurable with continuous paths. How can I prove this? I know the definition of measurability: the preimage of every measurable set is measurable. But what are the measurable sets in the image space, and how do I proceed from there? Any help is really appreciated.
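One sketch, using the continuity of the paths: for each $\omega$ the Riemann sums converge,
$$X_T(\omega) = \int_0^T B_t(\omega)\,dt = \lim_{n\to\infty} \frac{T}{n}\sum_{k=1}^{n} B_{kT/n}(\omega),$$
and each finite sum is a linear combination of the measurable maps $\omega\mapsto B_{kT/n}(\omega)$, hence measurable. The image space is simply $(\mathbb{R},\mathcal{B}(\mathbb{R}))$, and a pointwise limit of measurable real-valued functions is measurable, which gives the claim.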

About the minimum of the Gamma function on $(0,1)$

Posted: 11 Dec 2021 03:46 AM PST

Problem :

Denote by $k$ the minimum of the Gamma function $x!$ on $(0,1)$; then prove or disprove that:

$$\left(e^{-\frac{k^{2}}{C^2}}\right)!>k$$

Where $C=-1+\frac{1}{\ln(3)}+\ln(3)$


I cannot produce an attempt because the problem of the minimum of $x!$ is really hard. Consequently, I have tried numerical tools, and the inequality seems true to the first five digits ($k \approx 0.8856$).



Question :

How can I (dis)prove it? Have you seen this inequality before in the literature?

Thanks !

Expected number of tosses until showing differently from first result

Posted: 11 Dec 2021 03:48 AM PST

What is the expected number of coin tosses (heads comes with probability 0.5) until we get a result which is different from the first toss?

My first intuition is 3: we need to toss once, and then we have a geometric distribution with expectation of 2.

But, there are 2 possible scenarios: one scenario is when the first toss is heads and the other scenario is when the first toss is tails. So, this might suggest that the expectation is less than 3.... A bit confusing...
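A quick numeric confirmation that conditioning on the two scenarios changes nothing (by symmetry, heads-first and tails-first give the same conditional expectation, so the overall answer is still 3):

```python
# After the first toss, the waiting time for a different face is Geometric(1/2)
# with mean 2, regardless of what the first toss showed, so
# E[total tosses] = 1 + 2 = 3.
expected = 1 + sum(k * 0.5 ** k for k in range(1, 200))  # partial geometric-series sum
print(expected)  # approximately 3, up to floating-point truncation of the tail
```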

PMF I got: [images]

Calculus : integration of inverse function

Posted: 11 Dec 2021 03:49 AM PST

[The problem statement is given as an image.]

In this problem, $\frac pq = \frac {139}4$, so $p+q=143$.

But what if there is another case that satisfies the conditions but the function $f(x)$ is not differentiable somewhere, or everywhere, in $[1,8]$?

(This problem is Korea's KSAT problem number 30 for the calculus elective.)

How to compactly write a simple set for 2 variables when second variable may have same properties as first or it may optionally be positive infinity?

Posted: 11 Dec 2021 03:47 AM PST

It's been about 40 years since I learned higher math like partial differential equations and linear algebra, including simple tensors, but I never needed much of it. Now I'm working on a Wolfram Mathematica programming project that also requires describing solution sets in proper math notation. Below is what I have come up with, and I am not even sure if I used all the math symbols correctly.

For example, I'm not certain I wrote And correctly (in programming, && means And) or Or correctly (in programming, || means Or), which is how I would have written these two without asking for help, although the symbols are correctly written in LaTeX. I also don't know whether I'm using any math symbols (probably learned before you were born) that are now considered deprecated.

$$\{n,h \in \mathbb{Z} \geq 0 \lor n \in \mathbb{Z} \geq 0 \land h \to +\infty\}$$

Please help this old geezer out...

(1.) I need the most compact form (less is more) for this set, making certain all math symbols are written correctly by the most current standards. If the set is written correctly and cannot be compacted more without losing meaning, please comment below validating this. Thank you for sharing your expertise. Item 2 may be ignored, but I leave it as a reference to myself.

(2.) Why isn't my fat Z (written as $\Z$) for the set of all integers rendering correctly? I used a LaTeX source for that. Should I have instead referenced a TeX source? Never mind, I figured out how to write $\mathbb{Z}$ and found the greatest tutorial right here in the SE Math Meta MathJax page.

Taylor expansion of $\sin \pi z$ at $z = -1$.

Posted: 11 Dec 2021 03:32 AM PST

Taylor expansion of $\sin \pi z$ at $z = -1$ is $$\sin\pi z = -\sin(\pi(z+1)) = -\sum_{n=0}^\infty \frac{(-1)^n\pi^{2n+1}}{(2n+1)!}(z+1)^{2n+1}$$ so that $$\sin\pi z = \sum_{n=0}^\infty \frac{(-1)^{n+1}\pi^{2n+1}}{(2n+1)!}(z+1)^{2n+1}. \tag{$\dagger$}$$ But if I try this \begin{align} \sin\pi z & = \sum_{n=0}^\infty \frac{(-1)^n\pi^{2n+1}}{(2n+1)!}z^{2n+1} \\& = \sum_{n=0}^\infty \frac{(-1)^n\pi^{2n+1}}{(2n+1)!}(z+1-1)^{2n+1} \\ & = \sum_{n=0}^\infty \frac{(-1)^n\pi^{2n+1}}{(2n+1)!}\left(\sum_{k=0}^{2n+1}\binom{2n+1}{k}(z+1)^k(-1)^{2n-k+1}\right).\tag{$\dagger^*$} \end{align} In this case, how can I reduce $(\dagger^*)$ to the above form $(\dagger)$? Just by direct calculation?

Find the perimeter of a polygon $ABCDEF$

Posted: 11 Dec 2021 03:36 AM PST

A circle, with a radius of $12$ cm and with the center coinciding with the center of an equilateral triangle with a side of $36$ cm, intersects the sides of the triangle at points $A, B, C, D, E$ and $F$. Find the perimeter of the polygon $ABCDEF$.

Image of the question: [image]

I solved it as follows: I proved that this polygon is a regular hexagon, found its side, which is $12$, and then the perimeter is $6\cdot 12=72$. I was confused by the simplicity of the task. Did I solve the problem correctly?
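A numeric cross-check of the regular-hexagon claim (the coordinates are my own setup: center at the origin, one triangle side horizontal below it):

```python
import math

R, side = 12.0, 36.0
apothem = side * math.sqrt(3) / 6          # distance from the center to each side
half_chord = math.sqrt(R**2 - apothem**2)  # half the chord the circle cuts on a side

# Intersection points on the bottom side, then rotate by 120 and 240 degrees
pts = []
for rot in (0.0, 2 * math.pi / 3, 4 * math.pi / 3):
    for xsign in (-1, 1):
        x, y = xsign * half_chord, -apothem
        pts.append((x * math.cos(rot) - y * math.sin(rot),
                    x * math.sin(rot) + y * math.cos(rot)))

# Sort the six vertices by angle and measure consecutive distances
pts.sort(key=lambda p: math.atan2(p[1], p[0]))
sides = [math.dist(pts[i], pts[(i + 1) % 6]) for i in range(6)]
print([round(s, 6) for s in sides], round(sum(sides), 6))  # six sides of 12, perimeter 72.0
```

All six side lengths come out equal to the radius, confirming both the regularity (each chord of length 12 subtends 60°, so the six vertices are equally spaced) and the perimeter $72$.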

When is the projection of an ellipsoid a circle?

Posted: 11 Dec 2021 03:35 AM PST

Consider an ellipsoid in three-dimensional Euclidean space, say $$\frac{x^2}{a^2}+\frac{y^2}{b^2} + \frac{z^2}{c^2} =1 $$ where $a$, $b$, $c$ are positive reals. I'm counting the number of planes through the origin such that the image is a perfect circle. There may be degenerate cases if some of $a$, $b$, $c$ coincide, but at first let us focus on the case where $a$, $b$, $c$ are all different, say $a>b>c$.

I guess the answer would be $4$. I have made many efforts but failed. What I have observed is that at least two such planes exist and the radius of the circle is $b$: just consider a rotating plane containing the $y$ axis and apply the intermediate value theorem.

Caution! We are concerned with the projection, not the intersection.

PS. Now I guess there are infinitely many...

Stone–Čech compactification of a Tychonoff space can be taken as closure of diagonal mapping image in Tychonoff cube

Posted: 11 Dec 2021 03:27 AM PST

Let $X$ be a Tychonoff topological space. Show that the Stone–Čech compactification of $X$ can be obtained by taking the closure of the image of the space $X$ under the mapping $\Delta_{f\in C(X, I)}f$ in the space $\prod_{f\in C(X, I)}I_f$, where

  • $C(X, I)$ is the family of all continuous mappings from $X$ to $I=[0, 1],$

  • $I_f = I = [0, 1]$ for $f\in C(X, I),$

  • $\prod_{f\in C(X, I)}I_f$ is Tychonoff cube i.e. Cartesian product of unit intervals $I$ by indexing set $C(X, I)$ with product topology,

  • $\Delta_{f\in C(X, I)}f$ is the diagonal mapping, i.e.

$$\Delta_{f\in C(X, I)}f: X\to \prod_{f\in C(X, I)}I_f \;$$ $$\Delta_{f\in C(X, I)}f: x\mapsto \{f(x)\}_{f\in C(X, I)}\in\prod_{f\in C(X, I)}I_f \; .$$

  • The Stone–Čech compactification of the space $X$ is the largest element in the partial order of compactifications of $X$. This partial order can be defined in the following way. Let $Y$, $Z$ be compactifications of $X$ with homeomorphic embeddings $c_Y$ and $c_Z$ respectively. By definition, $Z \leq Y$ if there exists a continuous mapping $f: Y \to Z$ such that $fc_Y = c_Z$.

I will be glad for any idea, comment, hint or advice.

Can we figure out a distribution function for how long does it take for the following system to terminate? [closed]

Posted: 11 Dec 2021 03:23 AM PST

Suppose we simulate the following system:

In the beginning there are $N$ robots and a large reserve of fuel: $T$ units of fuel in total. Each robot takes some amount of fuel to start out. They consume fuel at a constant rate: 1 unit / second - regardless of what they are doing. If a robot runs out of fuel, it loses (dies in a sense) and it doesn't affect the system anymore.

The robots can access the reserve and they can take some amount of fuel for themselves. The amount of elapsed time between refuels follows an exponential distribution with rate $\lambda$. The amount of fuel the robots take at any given occasion follows a uniform distribution ~ $U(0, F)$.

If there is only 1 robot remaining, then it gets all the remaining fuel. For the sake of simplicity, let's assume that taking/transferring fuel happens pretty much instantly. Let's also assume that the robots can store and use any amount of fuel.

No additional fuel or robots are added during the simulation, no parameters are changed, only the initial values are given: $T, N, \lambda, F$.

Is it possible to state anything about the distribution of the termination time in terms of these parameters?


The main motivation was to figure out how long it would take to use up some sort of resource that is nonrenewable and necessary for life.

I've tried to define a function for the remaining fuel based on the time points when robots die. Let $d_k$ denote the times of death, where $k \in [1;n-1]$ and $0 \le d_1 \le d_2 \le \ldots \le d_{n-1}$. The following $r(t)$ shows the remaining fuel in the system at time point $t$:

$r(t)=\begin{cases}T-Nt & 0 \le t \le d_1\\ T-(N-1)t - d_1 & d_1 \lt t \le d_2 \\ ...\\ T-2t - d_1 - ... - d_{n-2} & d_{n-2} \lt t \le d_{n-1} \\ T-t - d_1 - ... - d_{n-2} - d_{n-1} & d_{n-1} \lt t \le T - d_1 - ... - d_{n-1} ​\\ \end{cases}$

However, this didn't really bring me closer to anything.

Solution to Nonlinear ODE $s''( t) s( t) =( s'( t))^{2} +B( s( t))^{2} s'( t) -g\cdot s( t) s'( t)$

Posted: 11 Dec 2021 03:49 AM PST

I've solved linear ODEs before; this, however, is something completely new to me. I want to solve it without using approximations.

$s''( t) s( t) =( s'( t))^{2} +B( s( t))^{2} s'( t) -g\cdot s( t) s'( t)$

These are the equations I started with

$ \begin{array}{l} s'( t) =-Bs( t) i( t)\\ r'( t) =g\cdot i( t)\\ i'( t) =i( t)( Bs( t) -g) \end{array}$

The reason I'm solving this particular equation is because I want to solve for all of the functions $(i(t),r(t),s(t))$ from above.

I just figured $s(t)$ might be the place to start. I really don't have any clue where to start here. Any help would be awesome! Thanks!

--edit-- I made a mistake above. In order to fix it I changed $i'( t) =i( t)( Bs( t) -1)$ to this $i'( t) =i( t)( Bs( t) -g)$. My bad...

---Edit---

I might have a step towards the answer.

$ \begin{array}{l} v( s) =s'( t)\\ \Longrightarrow \\ s''( t) =\frac{dv( s)}{dt} =\frac{dv}{ds}\frac{ds}{dt} =v'( s) \cdot \frac{ds}{dt} =v'( s) \cdot v( s)\\ \Longrightarrow \\ v'( s) \cdot v( s)\, s=v( s)^{2} +Bs^{2} v( s) -g\cdot s\cdot v( s)\\ \Longrightarrow \\ v'( s) \cdot s=v( s) +Bs^{2} -g\cdot s \end{array}$

Wolfram Alpha says that the solution to this differential equation is

$v(s)=Bs^2+c_1 s-gs\ln(s)$

So that means that

$s'(t)=Bs(t)^2+c_1s(t)-gs(t)\ln(s(t))$

This is certainly better than before
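The reduced equation and Wolfram Alpha's solution can be sanity-checked symbolically. A short sympy sketch (this only verifies the first-order equation in $v(s)$, not the original system):

```python
import sympy as sp

s = sp.symbols('s', positive=True)
B, g, c1 = sp.symbols('B g c1')

# Candidate solution reported by Wolfram Alpha
v = B * s**2 + c1 * s - g * s * sp.log(s)

# Check that it satisfies the reduced first-order ODE  v'(s) * s = v(s) + B*s^2 - g*s
residual = sp.simplify(sp.diff(v, s) * s - (v + B * s**2 - g * s))
print(residual)  # 0
```

With $v(s)$ confirmed, the remaining step is the separable equation $s'(t) = v(s(t))$, which generally only yields $t$ as an implicit integral in $s$.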

Valuations with the same valuation ring

Posted: 11 Dec 2021 03:20 AM PST

Given a field $k$, a valuation map $v$ is a surjective group homomorphism $k^\times\twoheadrightarrow H$, where $H$ is an ordered group (called the valuation group), such that $v(a+b)\ge \operatorname{min}(v(a),v(b))$. I must prove that if I have another valuation $v':k\twoheadrightarrow H'$, with the property that $v(a)\ge 0\iff v'(a)\ge 0$, then there is an order-preserving isomorphism $f:H\to H'$ such that $v'=f\circ v$.

Firstly, observe that, with $a,b\in k$, $v(a)\ge v(b)$ if and only if $v'(a)\ge v'(b)$: $$v(a)\ge v(b)\iff v(ab^{-1})\ge 0\iff v'(ab^{-1})\ge 0 \iff v'(a)\ge v'(b).$$

From $v'=f\circ v$ we see that on an element $h\in H$ (using that $h=v(a_h)$ for some $a_h\in k$) we must set $f(h):=v'(a_h)$. This shows that if such an $f$ exists as an order-preserving isomorphism, it is the unique one. The observation above guarantees that $f(h)$ doesn't depend on the choice of $a_h$, and that $f$ is order-preserving.

If $h,l\in H$, to prove that $f$ is a homomorphism: $$f(h+l)=f(v(a_{h})+v(a_{l}))=f(v(a_ha_l))=v'(a_ha_l)=v'(a_{h})+v'(a_{l})=f(h)+f(l).$$ Moreover, $f$ is bijective because the map $f':H'\to H$ defined by $h'\mapsto v(a_{h'})$, for $h'\in H'$ and $a_{h'}$ defined analogously to $a_h$, is the inverse of $f$.

Does this outline of the proof make sense? I'm a bit suspicious because I don't find where one should use that $v(a+b)\ge \operatorname{min}(v(a),v(b))$; did I use this hypothesis without seeing it?

If $A$ is a $5 \times 5$ complex matrix with $A^4=A^2 \neq A$, then what are its possible minimal polynomials?

Posted: 11 Dec 2021 03:43 AM PST

For this I have that $A^4-A^2=0$ so any possible minimal polynomial must divide $t^4-t^2=t^2(t-1)(t+1)$. Using the given condition I've been able to check that e.g. $A=-I$ works for $t+1$ so $t+1$ is a possible minimal polynomial, but $t-1$ is not possible since $A=I \implies A=A^2$.

Similarly for $(t-1)(t+1)=t^2-1$ we can find a matrix $A$ satisfying $A^2=I$ but $A \neq -I$ so this is also possible.

But there are many other possible polynomials including $t^2+t$, $t^3-t$, $t^2$, $t^3+t^2$, $t^3-t^2$ and $t^4-t^2$ that also divide $t^4-t^2$ and it seems either quite hard or very tedious to construct a $5 \times 5$ matrix satisfying one of these polynomials and checking that it doesn't satisfy any of the lower degree ones (unless it's a simple case like $t^2-t=0 \implies A^2=A$ so that one doesn't work). How do I find which of these are possible minimal polynomials?
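Since $A^4=A^2$ says exactly that the minimal polynomial divides $t^4-t^2=t^2(t-1)(t+1)$, and $A^2=A$ would say it divides $t^2-t$, the candidates should be precisely the divisors of $t^4-t^2$ that do not divide $t^2-t$; block-diagonal matrices realize each of them without tedious checking. A small numpy sketch for the largest candidate (the helper `blkdiag` is my own):

```python
import numpy as np

def blkdiag(*blocks):
    """Assemble a block-diagonal matrix from square blocks."""
    n = sum(b.shape[0] for b in blocks)
    M = np.zeros((n, n))
    i = 0
    for b in blocks:
        k = b.shape[0]
        M[i:i+k, i:i+k] = b
        i += k
    return M

J2 = np.array([[0.0, 1.0], [0.0, 0.0]])   # nilpotent Jordan block, minimal poly t^2

# 5x5 matrix with minimal polynomial t^2 (t-1)(t+1) = t^4 - t^2:
A = blkdiag(J2, np.array([[1.0]]), np.array([[-1.0]]), np.array([[0.0]]))

A2, A4 = A @ A, A @ A @ A @ A
print(np.allclose(A4, A2), np.allclose(A2, A))  # True False: A^4 = A^2 != A
```

The minimal polynomial of a block-diagonal matrix is the lcm of the blocks' minimal polynomials, so swapping blocks in and out realizes the other candidates (e.g. $J_2(0)$ padded with zeros gives $t^2$) while the two conditions remain routine to verify.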

Seeking elegant proof why 0 divided by 0 does not equal 1

Posted: 11 Dec 2021 03:42 AM PST

Several years ago, I was bored, and so for amusement I wrote out a proof that $\dfrac00$ does not equal $1$. I began by assuming that $\dfrac00=1$ and was eventually able to deduce from this (false) assumption that $0=1$. As this is clearly false, and if all the steps in my proof were logically valid, the conclusion is that my only assumption (that $\dfrac00=1$) must be false. Unfortunately, I can no longer recall the steps I used to arrive at the contradiction. If anyone could help me out, I would appreciate it.
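One common reconstruction of such an argument (the step the field axioms actually forbid is writing $a/b$ at all when $b=0$): assume $\frac00=1$ and that the usual fraction rules still apply. Then
$$1 = \frac{0}{0} = \frac{0\cdot 2}{0} = 2\cdot\frac{0}{0} = 2\cdot 1 = 2,$$
and subtracting $1$ from both sides gives $0 = 1$, the desired contradiction.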

tensor product of sheaves commutes with inverse image

Posted: 11 Dec 2021 03:44 AM PST

Let $f : X \to Y$ be a morphism of ringed spaces and $\mathcal{M}$, $\mathcal{N}$ sheaves of $\mathcal{O}_Y$-modules. Then one has a canonical isomorphism $f^*(\mathcal{M} \otimes_{\mathcal{O}_Y} \mathcal{N}) \cong f^*\mathcal{M} \otimes_{\mathcal{O}_X} f^*\mathcal{N}$, but I cannot find a proof in any of the standard references. The problem is that the definitions of the functors $f^*$ and $\otimes$ are so cumbersome that I cannot even write down a map between these two sheaves. Surely there is a nice way to do this: to give you an idea of what I mean by "nice," I am the type of person who likes to define such functors as adjoints to some less complicated functor, prove that they exist, and then forget the construction.
