Sunday, April 11, 2021

Recent Questions - Mathematics Stack Exchange



What function looks like this?

Posted: 11 Apr 2021 07:43 PM PDT

I have a function that empirically looks like this:

[figure: plot of the empirical function]

What analytical expression could yield this? It doesn't have to be exact; an approximate fit is good enough.

In the usual topology: if $\overline{X}\cap\overline{X^c} = \emptyset$ then $X=\mathbb{R}$ or $X=\emptyset$

Posted: 11 Apr 2021 07:42 PM PDT

I'm asked to prove that, in the usual topology, if the intersection of the closure of a set with the closure of its complement is empty, then the set must be either the empty set or all of the real numbers. That is:

If $\overline{X}\cap\overline{X^c} = \emptyset$ then $X=\mathbb{R}$ or $X=\emptyset$.

Assuming $X\neq\emptyset$, I'm struggling with the "complicated" half of the proof, namely that $\mathbb{R}\subset X$. So far my approach has been a proof by contradiction, taking $x\in\mathbb{R}$ and assuming $x\notin X$. Thus $x\in X^c$ and, since $X^c \subset \overline{X^c}$, $x$ lies in $\overline{X^c}$. Moreover, by hypothesis, if $x \in \overline{X^c}$ then $x \notin \overline{X}$, i.e. $x\in\overline{X}^{\,c}$.

Additionally, given that $\overline{X}\cap\overline{X^c} = \emptyset$, I obtain $\overline{X^c} \subset \overline{X}^{\,c}$.

Do you have any thoughts on how to proceed from here towards a contradiction? Or is there a more elegant way to prove this?

Thank you very much.
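
For what it's worth, a hedged sketch of a possibly more elegant route via connectedness: since

$$\mathbb{R}=X\cup X^{c}\subseteq\overline{X}\cup\overline{X^{c}},$$

the hypothesis makes $\overline{X}$ and $\overline{X^{c}}$ two disjoint closed sets covering $\mathbb{R}$, so each is the complement of the other and both are clopen. Connectedness of $\mathbb{R}$ forces one of them to be empty: $\overline{X^{c}}=\emptyset$ gives $X^{c}=\emptyset$, i.e. $X=\mathbb{R}$, while $\overline{X}=\emptyset$ gives $X=\emptyset$.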

Solve the equation for $x$: $mx + ce^{mx/c} = (c + pc + tm)/(1-p)$

Posted: 11 Apr 2021 07:39 PM PDT

I am trying to find the maximum values of this function: $f(x)=\left(1-e^{-mx/c}\right)\bigl(pt - (1-p)(t-x)\bigr)$

I took its derivative and set it equal to zero: $f'(x) = e^{-mx/c}\left((1-p)\left(1 - \frac{mx}{c}\right) + \frac{tm}{c}\right) - (1-p) = 0$

But after simplifying, I couldn't solve this equation for $x$: $mx + ce^{mx/c} = \dfrac{c + pc + tm}{1-p}$

Does anyone have any ideas on how to solve it?
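
For what it's worth, this equation appears solvable in closed form with the Lambert $W$ function: writing $K=\frac{c+pc+tm}{1-p}$ for the right-hand side, substituting $u = mx/c$ and dividing by $c$ turns $mx + ce^{mx/c} = K$ into $u + e^{u} = K/c$, whose solution is $u = K/c - W\!\left(e^{K/c}\right)$, i.e. $x = \frac{c}{m}\left(\frac{K}{c} - W\!\left(e^{K/c}\right)\right)$. A sketch in Python, assuming SciPy is available (the parameter values below are made up):

    import numpy as np
    from scipy.special import lambertw

    def solve_x(m, c, p, t):
        """Solve m*x + c*exp(m*x/c) = (c + p*c + t*m)/(1 - p) for x."""
        K = (c + p*c + t*m) / (1 - p)     # right-hand side of the equation
        A = K / c                         # equation becomes u + e^u = A with u = m*x/c
        u = A - lambertw(np.exp(A)).real  # principal branch; np.exp(A) may overflow for large A
        return c * u / m

    m, c, p, t = 1.0, 2.0, 0.3, 5.0       # hypothetical parameter values
    x = solve_x(m, c, p, t)
    print(m*x + c*np.exp(m*x/c), (c + p*c + t*m)/(1 - p))  # the two sides should agree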

Fourier Arbitrary-Phase Sinusoid Series

Posted: 11 Apr 2021 07:36 PM PDT

A Fourier Cosine Series uses an infinite sum of only cosine waves to represent a target function whose left endpoint is $0$, by considering its even extension. The even extension seems to amount to just adding a reflected version of the original function to the original function.

A Fourier Sine Series uses an infinite sum of only sine waves to represent a target function whose left endpoint is $0$, by considering its odd extension. The odd extension seems to amount to just adding a rotated version of the original function to the original function.

On one hand, these cases do seem to have a certain special simplicity to them. However, there are many other types of transformations, as well as many other sinusoid phases. Is it possible, then, to create a different extension by adding a differently-transformed version of the original function to itself, with the end goal of using something other than $\sin(x)$ or $\cos(x)$ as the fundamental wave in its own Fourier ____ Series?

As a hypothesized example, would it be possible to add a version of a function to itself, which had undergone a transformation "half way between a reflection and a rotation," to create a Fourier Half-way-between-cosine-and-sine Series that uses $\cos(x - \frac{\pi}{4})$ as the fundamental wave? If not, is there any other way to create such a series with $\cos(x - \frac{\pi}{4})$ (or an arbitrary-phase sinusoid) as the fundamental wave?

Typesetting a covector

Posted: 11 Apr 2021 07:31 PM PDT

I understand the notation of upper and lower indices for components of vectors and covectors, but when I write a vector not in component form I use an over-arrow notation e.g. $\vec r$. Is there a standard notation for covectors? An under-arrow? A left arrow?

Symmetric random walk on a lattice: Probability distribution of number of sites that have been visited

Posted: 11 Apr 2021 07:29 PM PDT

I'm imagining a $2D$ lattice, but insight into higher-dimensional lattices would also be helpful. My question is related to the number of self-intersections of a random walk on a lattice, but that is not my exact question.

In particular, I want to know the probability distribution of the number of unique grid points visited after $t$ steps. I know that the late-time position, with a little blurring, is well described by a Gaussian with standard deviation proportional to $\sqrt{t}$, yielding root-mean-square displacements proportional to $\sqrt{t}$. What I want to know is the behavior of the number of visited grid points: for example, does it grow like $t$, like $\sqrt{t}$, or like some other power law? Ultimately, I want asymptotic behavior, like the probability of having visited just a few dozen grid points after a very long time, as a function of $t$ and $n$.

To make clear what I mean by visited grid points, I have the following pictures. The steps are ordered from light to dark.

For example, in the drawing below, this specific instantiation of a random walk has visited 7 grid points after 7 steps, which I have colored a light pink. It started at the middle left and ended in the upper right.

[figure: a realization of the random walk, visiting 7 grid points in 7 steps]

Here's another realization of a random walk, this time reaching 5 grid points after 7 steps. [figure: second realization]


To summarize, my question is: what is the probability distribution of the number of unique grid points visited, $n$, after $t$ steps on a $2D$ lattice? I am particularly interested in the regime where $n/t$ is small.
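
I don't know a closed form for the full distribution; for the mean, the classical Dvoretzky–Erdős result reportedly gives growth like $\pi t/\log t$ on the $2D$ lattice, which at least fixes the leading-order scale. The distribution itself is cheap to explore empirically. A Monte Carlo sketch, assuming NumPy (the function name and sample sizes are mine):

    import numpy as np

    rng = np.random.default_rng(0)
    STEPS = [(1, 0), (-1, 0), (0, 1), (0, -1)]

    def distinct_sites(t):
        """Count distinct lattice sites visited in t steps (start included)."""
        x, y = 0, 0
        visited = {(x, y)}
        for k in rng.integers(0, 4, size=t):
            dx, dy = STEPS[k]
            x, y = x + dx, y + dy
            visited.add((x, y))
        return len(visited)

    t = 1000
    samples = [distinct_sites(t) for _ in range(500)]
    print(np.mean(samples), np.std(samples))   # compare the mean with pi*t/log(t)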

Proving that $G$ is a group.

Posted: 11 Apr 2021 07:26 PM PDT

Let $G:=\{ (p,q) | p,q\in \mathbb{Q} , \ p^2-2q^2\neq 0\}$ and define binary operation $(p,q) \circ (r,s) =(pr+2qs, \ ps+qr)$ for $(p,q),(r,s) \in G.$ Prove that $G$ is a group.

I could prove (i)–(iii).

(i) The binary operation is well-defined, i.e. $G$ is closed under $\circ$.

(ii) By calculation, I can see $(p,q) \circ ((r,s) \circ (t,u))=((p,q) \circ (r,s)) \circ (t,u)$ for $(p,q),(r,s),(t,u) \in G$.

(iii) The identity element is $(1,0)$: if $(p,q) \circ (r,s)=(r,s) \circ (p,q)=(p,q)$ for all $(p,q) \in G$, then $(r,s)=(1,0).$

But I couldn't prove the existence of inverse elements.

$(p,q) \circ (r,s) =(pr+2qs, \ ps+qr)$ so I should solve $pr+2qs=1, ps+qr=0$.

Because $q=\dfrac{1-pr}{2s}$, $\quad ps+\dfrac{1-pr}{2s} r =0$. I couldn't solve this.

How can I find $r$ and $s$?
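
Not a full solution, but the operation mirrors multiplication in $\mathbb{Q}(\sqrt{2})$, where $(p+q\sqrt{2})(r+s\sqrt{2})=(pr+2qs)+(ps+qr)\sqrt{2}$; that analogy suggests the candidate $(r,s)=\Bigl(\dfrac{p}{p^{2}-2q^{2}},\ \dfrac{-q}{p^{2}-2q^{2}}\Bigr)$, which is where the condition $p^{2}-2q^{2}\neq0$ earns its keep. A quick symbolic check, sketched with SymPy:

    import sympy as sp

    p, q = sp.symbols('p q')
    d = p**2 - 2*q**2                 # nonzero by the definition of G
    r, s = p/d, -q/d                  # candidate inverse of (p, q)
    print(sp.simplify(p*r + 2*q*s),   # -> 1
          sp.simplify(p*s + q*r))     # -> 0, so (p,q) o (r,s) = (1, 0)

It remains to check that $(r,s)$ actually lies in $G$, i.e. that $r^{2}-2s^{2}\neq0$.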

Hardy-Littlewood maximal function of $\log |x|$.

Posted: 11 Apr 2021 07:26 PM PDT

Consider the function $f: \Bbb R^n \to \Bbb R$ defined by $f(x)=\log |x|$ for $x \neq 0$ and $f(0)=0$. I'm trying to prove that the Hardy-Littlewood maximal function of $f$, $Mf$, equals $\infty$. That is, for every $x \in \Bbb R^n$, $$Mf(x) = \sup_{x \in B} \frac{1}{|B|} \int_B |f| = \infty,$$ where the supremum is taken over all balls $B$ in $\Bbb R^n$ containing $x$ and $|B|$ is the Lebesgue measure of the ball.

The reasoning I tried to use is that for any $x \in \Bbb R^n$ there is a ball $B$ containing both $x$ and $0$. Since $B$ contains $0$ and $|f(y)| \to \infty$ as $y \to 0$, I concluded that $\frac{1}{|B|} \int_B |f| = \infty$. But I don't know if that's true: do we need $|f|$ to actually take the value $\infty$ somewhere on $B$ to say that the integral over $B$ is $\infty$? In that case, would we need to define $f(0)=\infty$?
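
One computation that may help with the last doubt (a hedged sketch in dimension $n=1$): the logarithm is locally integrable, since for $0<R\leq1$

$$\int_{0}^{R}|\log t|\,dt=\Bigl[t-t\log t\Bigr]_{0}^{R}=R(1-\log R)<\infty,$$

so $|f|$ blowing up at the single point $0$ does not by itself make $\frac{1}{|B|}\int_{B}|f|$ infinite (and no redefinition of $f$ at the single point $0$ changes an integral, since a point has measure zero). If $Mf\equiv\infty$ is to hold, the supremum has to blow up another way, e.g. along larger and larger balls, where $\frac{1}{2R}\int_{-R}^{R}\bigl|\log|t|\bigr|\,dt$ grows like $\log R$ as $R\to\infty$.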

Irreducibility, dimension of $2\times 3$ matrices of rank at most 1

Posted: 11 Apr 2021 07:21 PM PDT

I'm very poor at abstract algebra.

Consider $X$, the set of $2\times3$ matrices with rank at most $1$ over a base field. With each matrix interpreted as a 6-tuple, show $X$ is irreducible and compute its dimension.

My attempt:

Elements of $X$ are of the form $\begin{bmatrix}a&ca&da\\b&cb&db\end{bmatrix}$. We know an affine variety is irreducible iff its coordinate ring is an integral domain, but even finding $I(X)$ is stumping me. Under the Zariski topology, we want those polynomials in 6 variables which vanish on $X$. My gut goes to $(x_1y_2-x_2y_1)^2+(x_1y_3-x_3y_1)^2$ (top row is $x$, bottom row is $y$), which certainly vanishes on $X$, but is this the generator of $I(X)$? As far as dimension is concerned, I have no idea how to show maximality of any chain I find.
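
Not an answer to the generator question, but a hedged sketch of one standard route for irreducibility and dimension: every rank $\le 1$ matrix factors as a column times a row, so $X$ is the image of the morphism

$$\mu:\mathbb{A}^{2}\times\mathbb{A}^{3}\to\mathbb{A}^{6},\qquad (x,y)\mapsto x\,y^{T}.$$

The source is irreducible and $X$ is closed (it is exactly the locus where the three $2\times2$ minors vanish), so $X$ is irreducible. Over a nonzero matrix in the image the fiber is one-dimensional ($x\mapsto\lambda x$, $y\mapsto\lambda^{-1}y$), which suggests $\dim X = 2+3-1 = 4$. On the candidate generator: the displayed polynomial does not involve the third minor $x_2y_3-x_3y_2$ at all, so it also vanishes on the rank-$2$ matrix with rows $(0,1,0)$ and $(0,0,1)$; the usual description (a standard fact about determinantal ideals) is that $I(X)$ is generated by the three $2\times2$ minors themselves.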

An estimate on a Dini continuous function.

Posted: 11 Apr 2021 07:14 PM PDT

Let $\omega$ be such that $$ \int_{0}^1\frac{\omega(s)}{s}\,ds\;<\;\infty $$ ($\omega$ is Dini continuous at $t=0$).

Define $$ \varphi(\delta)=\sup_{0<t\leq 1} \frac{\omega(\delta t)}{\omega(t)}. $$

Prove that $$ \lim_{\delta\rightarrow 0}\varphi(\delta)=0. $$

If needed, one can assume that for some $0<\alpha<1$, the function $t^{-\alpha} \omega(t)$ is monotonically decreasing (or non-increasing) in $t\in (0,1]$.

Probability that an independent event with 0.1 probability occurs after the third iteration.

Posted: 11 Apr 2021 07:10 PM PDT

An airline claims that there is a 0.10 probability that a coach-class ticket holder who flies frequently will be upgraded to first class on any flight. This outcome is independent from flight to flight. Sam is a frequent flier who always purchases coach-class tickets. What is the probability that Sam's first upgrade will occur after the third flight?

Since the flights are independent, wouldn't it just be 10%? Or, since the first upgrade must come after the third flight, would it be $(0.9)^3$?
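
For reference, under the usual reading (independent flights, "first upgrade after the third flight" meaning no upgrade on flights 1–3), the computation would be

$$P(\text{first upgrade after flight }3)=(1-0.1)^{3}=0.9^{3}=0.729,$$

while $0.10$ is the probability of an upgrade on any one given flight.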

The boundary of the closure is included in the boundary

Posted: 11 Apr 2021 07:19 PM PDT

I'd like to show that for a subset $S$ of $\mathbb{R}^2$, the following two things hold:

  1. $\text{bd}(\text{int}(S)) \subseteq \text{bd}(S)$
  2. $\text{bd}(\text{cl}(S)) \subseteq \text{bd}(S)$

I need this to prove that if $S$ is Riemann measurable, then so are its interior and its closure.
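
A hedged sketch, assuming the convention $\text{bd}(A)=\text{cl}(A)\setminus\text{int}(A)$:

$$\text{bd}(\text{int}(S))=\text{cl}(\text{int}(S))\setminus\text{int}(S)\subseteq\text{cl}(S)\setminus\text{int}(S)=\text{bd}(S),$$

$$\text{bd}(\text{cl}(S))=\text{cl}(S)\setminus\text{int}(\text{cl}(S))\subseteq\text{cl}(S)\setminus\text{int}(S)=\text{bd}(S),$$

using $\text{int}(\text{int}(S))=\text{int}(S)$, $\text{cl}(\text{cl}(S))=\text{cl}(S)$, $\text{cl}(\text{int}(S))\subseteq\text{cl}(S)$, and $\text{int}(S)\subseteq\text{int}(\text{cl}(S))$.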

Lebesgue integrable functions

Posted: 11 Apr 2021 07:07 PM PDT

Prove or disprove: if $f$ is non-negative and improperly Riemann integrable on $[0, \infty)$, then $f$ is Lebesgue integrable.
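
A hedged hint: for non-negative $f$ the conclusion should follow from the monotone convergence theorem, since $f\chi_{[0,b]}\uparrow f$ as $b\to\infty$ gives

$$\int_{[0,\infty)}f\,d\lambda=\lim_{b\to\infty}\int_{0}^{b}f(x)\,dx<\infty,$$

each $\int_{0}^{b}f$ being a proper Riemann integral that agrees with the Lebesgue integral on $[0,b]$. (Non-negativity matters: $\int_{0}^{\infty}\frac{\sin x}{x}\,dx$ converges improperly, yet $\frac{\sin x}{x}$ is not Lebesgue integrable.)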

Uniqueness of additive inverse

Posted: 11 Apr 2021 07:17 PM PDT

Using only these properties:

A1. x + (y+z) = (x+y) + z
A2. x + y = y + x
A3. There is an integer zero such that x + 0 = x
A4. To each integer x there corresponds an integer (-x) such that x+(-x)=0

Show that given x, (-x) is unique based on A1-A4

So far I've gotten

  • Assume A1-A4, y=-x, z=-x, and y != z
  • For all x: x+y = 0 based on A4
  • For all x: x+z = 0 based on A4
  • If x=z, z+y = 0
  • If x=y, y+z = 0
    However, I'm stuck from here. Any tips on how to continue the proof? (See the sketch below.)
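
A hedged sketch of one way to finish: rather than deriving a contradiction, show directly that any two inverses coincide. If $x+y=0$ and $x+z=0$, then

$$y = y+0 = y+(x+z) = (y+x)+z = 0+z = z,$$

using A3, then the assumption $x+z=0$, then A1, then A2 (to rewrite $y+x$ as $x+y=0$), and finally A2 with A3 again for $0+z=z$.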

When is this group compact?

Posted: 11 Apr 2021 07:41 PM PDT

Let $K=\begin{pmatrix} I_a & 0\\ 0 & -I_b \end{pmatrix}$, where $a+b=n$, $I_a$ is the $a\times a$ identity matrix, and $I_b$ is defined similarly. Let $G$ be the group consisting of all complex $n\times n$ matrices $A$ such that $A^*KA=K$. For what values of $a$ and $b$ is $G$ compact?

If $a=0$ or $b=0$, this just reduces to the unitary group, which is compact. However, if $n=2$ and $a=b=1$, then $A=\begin{pmatrix} c & \sqrt{c^2-1}\\ \sqrt{c^2-1} & c \end{pmatrix}$ lies in $G$ for every $c\geq1$ but is unbounded as $c$ grows. Is there a way to generalize this?
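
The same hyperbolic family seems to generalize: whenever $a\geq1$ and $b\geq1$, one can place that $2\times2$ block in coordinates $1$ and $a+1$ and pad with the identity, suggesting that $G$ is compact exactly when $a=0$ or $b=0$ (this matches the standard statement for the indefinite unitary group $U(a,b)$, stated here with hedging). A numeric sanity check of the $2\times2$ family, sketched with NumPy:

    import numpy as np

    K = np.diag([1.0, -1.0])
    for t in (1.0, 3.0, 5.0):
        A = np.array([[np.cosh(t), np.sinh(t)],
                      [np.sinh(t), np.cosh(t)]])    # c = cosh t, sqrt(c^2 - 1) = sinh t
        assert np.allclose(A.conj().T @ K @ A, K)   # so A lies in G
        print(t, np.linalg.norm(A))                 # grows like e^t: G is unbounded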

$R$ is Jacobson iff $R[x]$ is Jacobson

Posted: 11 Apr 2021 07:13 PM PDT

Let $R$ be a commutative ring. I already know how to prove that if $R$ is Jacobson then $R[x]$ is Jacobson, but I don't know how to prove that if $R[x]$ is Jacobson then $R$ is Jacobson. Any help will be appreciated.
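
A hedged hint: evaluation at $0$ exhibits $R$ as a quotient,

$$R\;\cong\;R[x]/(x),$$

and quotients of Jacobson rings are again Jacobson, since radical ideals (and the maximal ideals above them) of $R[x]/(x)$ correspond to those of $R[x]$ containing $x$.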

Where can I post a puzzle?

Posted: 11 Apr 2021 07:40 PM PDT

I have created a puzzle of the false mathematical proof variety, and I'd like to post it somewhere for people to have a go at. Once upon a time, I would have posted it to rec.puzzles, but those days are long gone.

Can anyone suggest a place where I could post it? I should point out that solving it will require knowledge of set theory, so it's not a beginner problem.

Prove that $7|3^{41}-5$

Posted: 11 Apr 2021 07:38 PM PDT

I am trying to prove that $7|3^{41}-5$.

The way I have been approaching this problem is by trying to factor the exponent of $41$ into a product of smaller exponents that will help me find a number that is divisible by $7$ with a remainder of $5.$

I have not gotten far, but my idea is:

$3^{41} - 5 = (3^{2})^{20}\times 3 - 5$

The part that I am getting hung up on is the fact that $41$ is a prime number. In my class, we have done examples where the exponent is factorable, so there is not a lingering 3 being multiplied into the factored exponent. I believe I understand the process of how this works when the exponent is not a prime number, but I cannot seem to understand how to accomplish this proof given these parameters.

We have covered modular arithmetic in this class and I am thinking maybe I should be using that to handle this problem, but I am not sure how to do so.
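
One standard route via Fermat's little theorem, sketched here for comparison: since $\gcd(3,7)=1$, Fermat gives $3^{6}\equiv1\pmod 7$, and $41=6\cdot6+5$, so

$$3^{41}=(3^{6})^{6}\cdot3^{5}\equiv3^{5}=243=7\cdot34+5\equiv5\pmod 7,$$

hence $7\mid3^{41}-5$. The point is that primality of the exponent is no obstacle: one reduces $41$ modulo the order of $3$ (here $3^{1},\dots,3^{6}\equiv3,2,6,4,5,1\pmod 7$) rather than factoring it.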

Some properties of the content of a polynomial

Posted: 11 Apr 2021 07:04 PM PDT

  1. Let $F \in \mathbb{Q}[x]$. Then there exist $a,b \in \mathbb{Z}$, with $\gcd(a,b)=1$, and a primitive polynomial $f \in \mathbb{Z}[x]$, such that $F = \dfrac{a}{b}f$.

  2. Let $f,g \in \mathbb{Z}[x]$. Show using Gauss's lemma that: $$cont(fg)=cont(f)cont(g)$$

My idea for 1 was to take $a=\gcd(a_0,\dots,a_n)$ and $b=\operatorname{lcm}(b_0,\dots,b_n)$ (writing the coefficients of $F$ as $a_i/b_i$), and then to pick $f = \dfrac{b}{a}F$, but I'm not sure of it.

For 2, I thought of writing $f = cont(f)\,f^*$ and $g=cont(g)\,g^*$, with $f^*$ and $g^*$ primitive polynomials. Then $f^*g^*$ is primitive by Gauss's lemma, and so: $$cont(fg) = cont(cont(f)cont(g)f^*g^*) = cont(f)cont(g)\,cont(f^*g^*) = cont(f)cont(g),$$ because by definition $cont(f^*g^*)=1$.

Note: $cont(f)$ denotes the content of the polynomial $f$.

State space for Markov chain.

Posted: 11 Apr 2021 07:32 PM PDT

I'm trying to do this exercise from Probability by John Walsh:

Let $\{X_{n},\ n = 0,1,2,\dots\}$ be a Markov chain whose transition probabilities MAY NOT be stationary. Define $X'_{n}$ to be the tuple $X'_{n}=(X_{0},\dots,X_{n})$. Show that $(X'_{n})$ is a Markov chain with STATIONARY transition probabilities. (What is its state space?)



If I apply the stationarity condition I should obtain something like:

$P(X'_{n+1}=j|X'_{n}=i)=P(X'_{1}=j|X'_{0}=i).$

But, for example, the values taken by $X'_{n}$ are $(n+1)$-tuples, while the values of $X'_{0}$ are just $1$-tuples.

I want to find a "common" state space for the chain and show the claimed stationary condition.

Thanks in advance.
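
A hedged hint on the state space: one natural choice is the set of all finite paths

$$E=\bigcup_{n\geq0}S^{n+1},$$

where $S$ is the state space of $(X_{n})$. From the state $(x_{0},\dots,x_{n})$, the chain $(X'_{n})$ can only move to states of the form $(x_{0},\dots,x_{n},x_{n+1})$; all other transitions get probability $0$. Since the time $n$ can be read off from the length of the current state, the transition kernel on $E$ needs no explicit time index, and that is what makes $(X'_{n})$ stationary even though $(X_{n})$ need not be.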

When does the average of directional derivatives (for a univariate function) match the weak derivative?

Posted: 11 Apr 2021 07:05 PM PDT

This question relates to another question I asked here. Suppose we have a univariate function $f:\mathbb{R} \rightarrow \mathbb{R}$ where the directional derivatives $f_\downarrow':\mathbb{R} \rightarrow \mathbb{R}$ and $f_\uparrow':\mathbb{R} \rightarrow \mathbb{R}$ exist over the whole domain. Suppose we then define the "pseudo-derivative" as the average of the directional derivatives:

$$f_*'(x) = \frac{f_\downarrow'(x) + f_\uparrow'(x)}{2} \quad \quad \quad \text{for all } x \in \mathbb{R}.$$

I would like to know the circumstances in which this "pseudo-derivative" $f_*'$ is equal to the weak derivative of $f$. I have noticed that this pseudo-derivative matches the weak derivative in the case of the absolute value function, so I wonder if there is a more general correspondence, or some reasonable sufficient conditions for correspondence.


My Question: Under what conditions on $f$ will this "pseudo-derivative" match the weak derivative of $f$?

Determine if the series representation is true or not

Posted: 11 Apr 2021 07:08 PM PDT

A double factorial is such that $(2n)!!=(2n)(2n-2)\cdots(2)$ and $(2n-1)!!=(2n-1)(2n-3)\cdots(3)(1)$. So $8!!=(8)(6)(4)(2)$. More information here: https://en.wikipedia.org/wiki/Double_factorial

With this in mind, I want to know 2 things. Is it true that

$$2^{-\frac{3}{2}}=2\sum_{n=1}^{\infty}(-1)^{n-1}\frac{(2n-1)!!}{(2n)!!}n$$

I tried to convert $2^{-\frac{3}{2}}$ into its power series representation...and failed. What I did was $2^{-\frac{3}{2}}=e^{\ln(2^{-3/2})}= \sum_{n=0}^{\infty}\frac{\bigl(\ln 2^{-3/2}\bigr)^{n}}{n!}$, but this is nowhere close to what I want. If there is a way to express these double factorials differently, please inform me of it.


And is this true:

$$\pi = 3\sum_{n=0}^{\infty}\frac{(2n+1)!!}{(2n+1)^2(2n)!!4^n}$$

This one I really have no idea how to approach. I expanded and simplified the factorials, getting $3\sum_{n=0}^{\infty}\frac{(2n-1)(2n-3)\cdots(1)}{(2n+1)(2n)(2n-2)\cdots(2)\,4^n}$. At most each term can be simplified to $\frac{(2n-1)!!}{(2n+1)(2n)!!\,4^n}$. Are there special properties of these double factorials that I can use? I also vaguely remember a series form of the binomial theorem that looks similar to this. Not sure how it would help though.
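
On the second series, the shape of the terms suggests comparing with the arcsine series $\arcsin x=\sum_{n\ge0}\frac{(2n-1)!!}{(2n)!!}\,\frac{x^{2n+1}}{2n+1}$ at $x=\frac12$, since $\frac{(2n+1)!!}{(2n+1)^{2}(2n)!!}=\frac{(2n-1)!!}{(2n+1)(2n)!!}$. A partial-sum check in Python (a sketch; numerics are evidence, not proof):

    from math import pi

    total, r = 0.0, 1.0                 # r tracks (2n+1)!!/(2n)!!, starting at n = 0
    for n in range(40):
        if n > 0:
            r *= (2*n + 1) / (2*n)      # update the double-factorial ratio
        total += r / ((2*n + 1)**2 * 4**n)
    print(3*total, pi)                  # both print as 3.141592653589793

For the first identity, note that $\frac{(2n-1)!!}{(2n)!!}\sim\frac{1}{\sqrt{\pi n}}$, so the terms grow like $\sqrt{n/\pi}$ and the series cannot converge in the ordinary sense; at best it holds in an Abel-summation sense, obtained by differentiating $(1+x)^{-1/2}=\sum_{n\ge0}(-1)^{n}\frac{(2n-1)!!}{(2n)!!}x^{n}$ and letting $x\to1^{-}$.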

How do I prove that if a = -a then a=0?

Posted: 11 Apr 2021 07:09 PM PDT

To be proved using axioms & known theorems

P5 - Multiplicative Associativity
P6 - Existence of multiplicative identity
P7 - Existence of multiplicative inverse
P8 - Multiplicative commutativity
P9 - Distributive property

Th 1: $a\cdot 0=0$
Th 2: $a\cdot b=0$ if and only if $a=0$ or $b=0$
Th 3: $-a=(-1)\cdot a$
Th 4: $-(-a)=a$
Th 5: $(-1)\cdot(-1)=1$
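
If the additive axioms may be used alongside P5–P9 (a hedged sketch, assuming also that $1+1\neq0$): from $a=-a$,

$$a+a=a+(-a)=0,$$

while by P6 and P9, $a+a=1\cdot a+1\cdot a=(1+1)\cdot a$. Hence $(1+1)\cdot a=0$, and since $1+1\neq0$, Th 2 forces $a=0$.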

Probability of 4 parents sharing 2 birthdays

Posted: 11 Apr 2021 07:10 PM PDT

I'm a math idiot and I'm trying to figure something out. My mother was born on April 10th as was my friend's mother. My father was born on July 4th as was my friend's father.

How do I calculate the probability of that happening?

EDIT: How would one calculate the probability of 2 pairs of shared birthdays? I think this is the answer I'm looking for.

Halp.

-Brett
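
A rough calculation under loudly stated assumptions (365 equally likely birthdays, leap days ignored, all four birthdays independent): the probability that your mother matches your friend's mother and your father matches your friend's father is

$$\left(\frac{1}{365}\right)^{2}=\frac{1}{133225}\approx 7.5\times10^{-6},$$

about 1 in 133,000. The probability of observing some equally striking coincidence is larger, and depends on how many kinds of coincidence you would have counted as remarkable.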

Supremum of a sequence of real numbers

Posted: 11 Apr 2021 07:22 PM PDT

Let $(b_n)_{n\geq 1}$ be a sequence of real numbers whose terms are defined as follows :

$b_1 :=x+\frac{22}{13^2} -\frac{2}{13}y$, $b_2 :=x+\frac{22}{14^2} -\frac{2}{14}y$ and so on, where $x \in \mathbb R$ and $ y \in \mathbb N$ and $y \geq 2$ is arbitrarily fixed.

I need to calculate $\text{Sup}_{n \in \mathbb N}(b_n)$.

Claim: $\text{Sup}_{n \in \mathbb N}(b_n) =x$.

Attempt: It's not difficult to guess by inspection that the sequence is strictly increasing. For that purpose it's enough to check that the function $f(m)= \frac{22}{m^2}-\frac{2y}{m}$ is strictly increasing (towards $0$ from below) for $m \geq 13$, equivalently that $|f(m)|=\frac{2y}{m}-\frac{22}{m^2}$ is strictly decreasing. This holds because $f'(m)=\frac{2ym-44}{m^{3}}>0$ whenever $ym>22$, which is true for all $m \geq 13$ and $y \geq 2$.

Once we have shown that the sequence is increasing, we note that the terms $\frac{22}{m^2}-\frac{2y}{m}$ go to $0$ as $m$ increases (here is the place where I need more rigour). Therefore the sequence has supremum $x$.

Please correct me if the above argument is not correct. It would also be valuable to me if someone pointed out a more rigorous solution.

Any help from anyone is welcome.
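
If it helps, the limit step can be made rigorous with an explicit bound (a hedged sketch): for $m\geq13$ and $y\geq2$,

$$0<\frac{2y}{m}-\frac{22}{m^{2}}<\frac{2y}{m},$$

where the left inequality holds because $\frac{22}{m}\leq\frac{22}{13}<4\leq 2y$. The left inequality says every $b_{n}<x$ (so $x$ is an upper bound, never attained), and the right one says that for any $\varepsilon>0$, taking $m>\max(13,\,2y/\varepsilon)$ gives $x-\varepsilon<b_{n}<x$. Hence nothing smaller than $x$ is an upper bound, and $\sup_{n}b_{n}=x$.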

Are $X+Y+XY$ and $X+Y-XY$ isomorphic formal group laws over the integers?

Posted: 11 Apr 2021 07:18 PM PDT

I would like to check whether or not $X+Y+XY$ and $X+Y-XY$ are isomorphic as formal group laws over the ring of integers $\Bbb{Z}$.

It is known that they are isomorphic over the rational field.

But what about in $\Bbb{Z}$?

I would like to find a power series $f\in \Bbb{Z}[[X]]$ such that $f(X+Y+XY) = f(X)+f(Y)-f(X)f(Y)$. Thank you in advance.
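
A hedged suggestion: before hunting for a complicated series, it may be worth testing $f(X)=-X$, which lies in $\Bbb{Z}[[X]]$, has an invertible linear term, and satisfies

$$f(X+Y+XY)=-X-Y-XY=f(X)+f(Y)-f(X)f(Y),$$

since $f(X)f(Y)=(-X)(-Y)=XY$. So the two laws appear to be isomorphic over $\Bbb{Z}$ already; equivalently, $1+(X+Y+XY)=(1+X)(1+Y)$ while $1-(X+Y-XY)=(1-X)(1-Y)$, and $f$ just exchanges the two normalizations.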

Upper bound of the dimension of a vector space of functions with conditions on roots on a line

Posted: 11 Apr 2021 07:26 PM PDT

Let $L$ be a vector space of functions from $\mathbb F^n$ to $\mathbb F$ such that if $f \in L$ and $f(x) = 0$ for more than $d$ points $x$ on a line in $\mathbb F^n$, then $f(x) = 0$ for all points on the line. Prove that the dimension of $L$ is at most $(d+1)^n$.

[Remark]

Equivalently, we can prove that if $\operatorname{dim}(L) > (d+1)^n$, then $\exists f \in L$ such that there are $d + 2$ points $\{x_i\}_{i \in [d+2]}$ on a line in $\mathbb F^n$ such that $f(x_i) = 0, \forall i \in [d+1]$ but $f(x_{d+2}) \neq 0$.

For each $f$, and $v, w \in \mathbb F^n$, $w \neq 0$, we let $\tilde{f}_{v,w}: \mathbb F \rightarrow \mathbb F$ be defined as $\tilde{f}_{v,w}(t) = f(v + t w)$. The condition says that if $f \in L$, then for each $v, w$, $\tilde{f}_{v,w}$ has $\leq d$ zeros or $\tilde{f}_{v,w} \equiv 0$.

[n=1]

When $n = 1$, assume $L$ has dimension $m > d + 1$, and let $\{f_i\}_{i \in [m]}$ be a basis of $L$. Choose $d + 1$ distinct points (on a line, which here is the whole of $\mathbb F$) $\{y_i\}_{i \in [d+1]} \subset \mathbb F$ and consider the system of $d+1$ linear equations $\sum_{i \in [m]} c_i f_i (y_j) = 0, j \in [d+1]$, in $m > d+1$ variables $(c_i)_{i \in [m]}$. Therefore, there must exist a nontrivial solution $(c_i)_{i \in [m]}$. However, because $L$ is a vector space, $g := \sum_{i \in [m]} c_i f_i \in L$, and now we have $g \not\equiv 0$ but $g$ vanishes on $d+1$ distinct points on a line, contradicting the given condition.

I still have some problems generalizing the above idea to $n > 1$. I tried to use induction on $n$ but my current proof seems incorrect.

Assume that the proposition is true when $n \leq k$. When $n = k + 1$, for $f: \mathbb F^{k+1} \rightarrow \mathbb F$ and $v \in \mathbb F$, we let $f_v: \mathbb F^{k} \rightarrow \mathbb F$ be defined as $f_v(x_1, ..., x_k) = f(x_1, ..., x_k, v)$. Clearly, if $f \in L$, i.e., $f$ vanishes on more than $d$ points on a line implies that $f$ vanishes on the whole line, then $f_v$ has the same property, for all $v$. Therefore, by the induction hypothesis, the space $L_v := \{f_v: f \in L\}$ has dimension at most $(d+1)^k$, for each fixed $v$. On the other hand, for $f \in L$ and $X = (x_1, ..., x_k)$, we let $f_X: \mathbb F \rightarrow \mathbb F$ be defined as $f_X(v) = f(x_1, ..., x_k, v)$, which clearly satisfies the condition with $n = 1$. Therefore, the dimension of $L$ is at most $(d+1)^k \times (d+1) = (d+1)^{k+1}$.

Tips/tools for writing fast in LaTeX?

Posted: 11 Apr 2021 07:22 PM PDT

In my institute, writing assignments in LaTeX is mandatory. Most of my assignment text involves plenty of mathematical expressions and symbols like $\lim, \log, \Theta, \bigoplus$, and many more. I first solve my assignments using pen and paper, then write them up in LaTeX. It takes me around $2$ hours to complete one page in LaTeX, and my assignments are generally $6$-$7$ pages, so I spend a huge amount of time just typing my answers. I don't spend much time searching for symbols in MathJax anymore, as I am comfortable with it, but the cycle of typing, reviewing the PDF, fixing formatting mistakes, reviewing again, and adjusting the layout for clarity still takes a very long time. I have a book on LaTeX, but due to the very tight course schedule at my institution I have no time to read it. I also tried building a tool that converts details typed into a Word or Excel document into LaTeX formatting, but it is not foolproof and requires regular maintenance whenever I include new symbols in my assignments.

Any tips, or any pointers to available tools for writing LaTeX fast, would be of great help.

Show $I(V(f))=(f)$ when $f$ is irreducible.

Posted: 11 Apr 2021 07:17 PM PDT

This is a question from Perrin's text, and it goes like this: Let $k$ be algebraically closed. Let $F\in k[x,y]$ be an irreducible polynomial. Assume that $V(F)$ is infinite. Prove that $I(V(F))=(F)$.
Here, $V(F)$ is the set of zeroes of the polynomial, and $I(V(F))$ is the ideal of all polynomials vanishing on $V(F)$.

Proof: Since $k$ is a field, $k[x,y]$ is a unique factorization domain, so an irreducible polynomial $F$ in $k[x,y]$ is the same thing as a prime element, and $F$ generates a prime ideal. Next, we have $\operatorname{rad}((F))=(F)$, since $(F)$ is a prime ideal. So by the Nullstellensatz, $(F)=\operatorname{rad}((F))=I(V(F))$.

That is my proof for the problem, but nowhere in my proof did I use the hypothesis that $V(F)$ was infinite. So I was wondering if my proof is valid, or if I made some wrong assumption along the way. Also this proof would work for $k[x_1,...,x_n]$, and the problem only asks for $k[x,y]$, so I'm extra dubious about its correctness.

properly embedded submanifold

Posted: 11 Apr 2021 07:06 PM PDT

Definition: Let $Y$ be a smooth manifold. A proper submanifold of $Y$ is a submanifold $(X,\phi)$ such that $\phi:X\to Y$ is a proper map.

I would appreciate a hint for the following exercise. Question: Let $f:M\to N$ be a submersion and let $P\subset N$ be a proper submanifold; show that $f^{-1}(P)\subset M$ is a proper submanifold.

Here is what I tried

Because $P\subset N$ a proper submanifold, then the inclusion map $i_N:P\to N$ is proper.

Let us prove that the inclusion map $i_M:f^{-1}(P)\to M$ is proper. Let $K\subset M$ be a compact set; we have to prove that $i_M^{-1}(K)=K\cap f^{-1}(P)$ is compact.

Edited

Let $(V_i)_{i\in I}$ be an open covering of $i_M^{-1}(K)=K\cap f^{-1}(P)$ in $f^{-1}(P)$; then there are open sets $U_i$ of $M$ such that $$V_i=U_i\cap f^{-1}(P),$$ so $$K\cap f^{-1}(P)\subset\bigcup_{i\in I} V_i=\bigcup_{i\in I} \left( U_i\cap f^{-1}(P)\right)=\left(\bigcup_{i\in I}U_i \right)\cap f^{-1}(P).$$

Hence \begin{equation}\label{1081} K\cap f^{-1}(P)\subset\left(\bigcup_{i\in I}U_i \right)\cap f^{-1}(P).....(*) \end{equation}

On the other hand, \begin{equation}\label{1082} f^{-1}(f(K)\cap P)=f^{-1}(f(K))\cap f^{-1}(P) .....(**) \end{equation}

and $$K\cap f^{-1}(P)\subset f^{-1}(f(K))\cap f^{-1}(P)\qquad(\text{since } K\subset f^{-1}(f(K))),$$ so from $(**)$ we get

$$K\cap f^{-1}(P)\subset f^{-1}(f(K))\cap f^{-1}(P)= f^{-1}(f(K)\cap P)$$

Here is where I get stuck.

What I want to get is that $$f(K)\cap P\subset\displaystyle\bigcup_{i\in I}f(U_i)\cap P$$ and, since $i_N:P\to N$ is proper and $f(K)$ is compact, $f(K)\cap P$ is compact in $P$; then I get a finite subcovering and finish the proof.
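
A hedged observation that may shortcut the covering argument: a proper map into a manifold (which is locally compact and Hausdorff) is closed, so $P=i_N(P)$ is closed in $N$. Then $f^{-1}(P)$ is closed in $M$ by continuity, and $i_M^{-1}(K)=K\cap f^{-1}(P)$ is a closed subset of the compact set $K$, hence compact. (The submersion hypothesis is what makes $f^{-1}(P)$ a submanifold in the first place, via transversality; it does not seem to be needed for properness.)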
