Saturday, June 4, 2022

Recent Questions - Mathematics Stack Exchange



Three dimensional Cairo Pentagonal Tiling

Posted: 04 Jun 2022 07:28 AM PDT

I am interested in the three-dimensional Cairo pentagonal tiling. Could anyone suggest reference(s) about it?

Complete bipartite graph in a bipartite graph

Posted: 04 Jun 2022 07:26 AM PDT

I would like to get some help with the following question:

Suppose G is a bipartite graph on vertex sets A, B and G has |A||B|/k edges. Show that there are A' ⊆ A and B' ⊆ B spanning a complete bipartite subgraph, with |A'| ≥ |A|/k and |B'| ≥ |B|/(2^|A|).

integrability of $x^a/(e^x-1)$

Posted: 04 Jun 2022 07:26 AM PDT

I'm trying to study if the function $f(x) = \dfrac{x^a}{e^x-1}$ is integrable over $\mathbb{R}^+$, where $a \in \mathbb{R}$. I'm done with all cases for $a$, except for $0 < a < 1$.

I haven't been able to figure out any bound for it, or inequality that could help me out. I feel like $f$ isn't integrable over $\mathbb{R}^+$, but I can't prove it.

Some help would be appreciated :)
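Not part of the original post, but a quick numerical sanity check (a sketch; it assumes plain trapezoidal integration is accurate enough, and that the tail beyond $x=40$ is negligible): near $0$ the integrand behaves like $x^{a-1}$, which suggests the truncated integrals should stabilize as the lower cutoff shrinks when $0 < a < 1$.

```python
import math

def f(x, a):
    # integrand x^a / (e^x - 1); near 0 it behaves like x^(a-1)
    return x ** a / math.expm1(x)

def trapezoid(g, lo, hi, n=200000):
    h = (hi - lo) / n
    s = 0.5 * (g(lo) + g(hi))
    for i in range(1, n):
        s += g(lo + i * h)
    return s * h

a = 0.5
# truncated integrals over [eps, 40] for shrinking eps; the increments should shrink
vals = [trapezoid(lambda x: f(x, a), eps, 40.0) for eps in (1e-2, 1e-3, 1e-4)]
```

The successive increments decrease roughly like $2(\sqrt{\epsilon_1}-\sqrt{\epsilon_2})$, consistent with integrability near $0$ for $a = 1/2$.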

Fourier series of an absolutely continuous function

Posted: 04 Jun 2022 07:22 AM PDT

Consider a function $f$ which is $2\pi$-periodic and absolutely continuous on $[-\pi, \pi]$.

Prove, that Fourier coefficients in series expansion $f(x) = \frac{a_0}{2}+\sum\limits_{n=1}^\infty(a_n\cos nx+b_n\sin nx)$ are $a_n=o(1/n)$ and $b_n=o(1/n)$.

I tried calculating $a_n = \int\limits_{-\pi}^\pi f(x)\cos(nx)dx$ by parts, in order to get $\int\limits_{-\pi}^\pi nf(x)\sin(nx)dx$ and prove that it tends to $0$, but didn't achieve any good results. Also, I don't know which properties of absolute continuity are useful here.
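One standard route, sketched here as a hint (using the usual $1/\pi$ normalization of the coefficients): absolute continuity gives $f' \in L^1[-\pi,\pi]$ and justifies integration by parts, after which the Riemann–Lebesgue lemma applies.

```latex
a_n(f) = \frac{1}{\pi}\int_{-\pi}^{\pi} f(x)\cos(nx)\,dx
       = \underbrace{\Big[\frac{f(x)\sin(nx)}{\pi n}\Big]_{-\pi}^{\pi}}_{=0}
         - \frac{1}{n}\cdot\frac{1}{\pi}\int_{-\pi}^{\pi} f'(x)\sin(nx)\,dx
       = -\frac{1}{n}\,b_n(f'),
```

and $b_n(f')\to 0$ by the Riemann–Lebesgue lemma since $f'\in L^1$, so $a_n = o(1/n)$; the argument for $b_n$ is symmetric.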

Does continuity of $f(z)$ imply that of $\overline{f(\overline{z})}$?

Posted: 04 Jun 2022 07:21 AM PDT

I was stuck on a problem with complex-valued functions. Here is the question:

If a function $f(z)$ is continuous at a point $z=z_0$, then does this imply that the induced function $\overline{f(\overline{z})}$ is continuous at $z=z_0$? Much help required. Thank you so much!!

Is the order of a field automorphism equal to the degree over the fixed-point subfield?

Posted: 04 Jun 2022 07:20 AM PDT

Given any field $L$ and any automorphism $f:L \to L$, one could define the fixed-point subfield $K := \{x \in L \mid f(x)=x\}$ in the obvious way.

Now, suppose that $f$ has finite order $n$. Does this mean that the field extension $L/K$ has degree $n$ (i.e., $L$ is an $n$-dimensional $K$-vector space)?

The statement is true in a few examples:

  • The identity has order $1$, and clearly, the fixed-point subfield is all of $L$ and $L$ is $1$-dimensional over $L$.
  • Complex conjugation has order $2$, and the complex numbers are $2$-dimensional over the real numbers, which form the fixed-point subfield.
  • The same is true for $\mathbb{Q}(\sqrt{x})$ where $x$ is any rational number that is not the square of another rational number.
  • If $L$ is a finite field of order $p^n$ and $f$ is the Frobenius automorphism (i.e., $f(x)=x^p$), then the fixed-point subfield is the prime subfield of $L$, and clearly, $L$ is $n$-dimensional over its prime subfield.

Is the statement true in general, or are additional assumptions needed (e.g., $L$ is perfect, or $L$ has characteristic $0$)?
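Not from the question, but a tiny computational check of the Frobenius example (a sketch using a hand-rolled $\mathbb F_4 = \mathbb F_2(\alpha)$ with $\alpha^2=\alpha+1$): the fixed points of $x\mapsto x^2$ form exactly the prime subfield, and the extension degree $2$ equals the order of the automorphism.

```python
# Elements of GF(4) as pairs (c0, c1) meaning c0 + c1*alpha, with alpha^2 = alpha + 1
elements = [(c0, c1) for c0 in (0, 1) for c1 in (0, 1)]

def mul(u, v):
    c0, c1 = u
    d0, d1 = v
    # (c0 + c1 a)(d0 + d1 a) = c0 d0 + (c0 d1 + c1 d0) a + c1 d1 (a + 1)
    return ((c0 * d0 + c1 * d1) % 2, (c0 * d1 + c1 * d0 + c1 * d1) % 2)

frobenius = {x: mul(x, x) for x in elements}        # x -> x^2
fixed = [x for x in elements if frobenius[x] == x]  # fixed-point subfield
```

Here `fixed` comes out as $\{0, 1\} = \mathbb F_2$, and applying the Frobenius twice is the identity, so its order is $2 = [\mathbb F_4 : \mathbb F_2]$.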

How to show that the integral $\int_{0}^{\infty} \frac{x^n}{1+x^m}\,dx$ converges when $m > n+1$, where $m,n$ are positive integers?

Posted: 04 Jun 2022 07:16 AM PDT

How to show that the integral $\int_{0}^{\infty} \frac{x^n}{1+x^m}\,dx$ converges when $m > n+1$, where $m,n$ are positive integers?

I have tested this for specific numbers and it looks like we need to use partial fraction decomposition. Is there some general formula for that we can use here?
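A comparison argument, sketched here (no partial fractions needed), splits the integral at $1$:

```latex
\int_0^\infty \frac{x^n}{1+x^m}\,dx
= \int_0^1 \frac{x^n}{1+x^m}\,dx + \int_1^\infty \frac{x^n}{1+x^m}\,dx
\le \int_0^1 x^n\,dx + \int_1^\infty x^{n-m}\,dx,
```

and the last integral is finite precisely when $m - n > 1$, i.e. $m > n + 1$.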

How do I prove that if $\sum_{n=1}^\infty \Bbb{P}(A_n)< \infty$ then $\Bbb{P}\left(\limsup_{n\rightarrow \infty} A_n\right)=0$?

Posted: 04 Jun 2022 07:22 AM PDT

I have the following problem.

Let $(\Omega, F,\Bbb{P})$ be a probability space. Show that if $\sum_{n=1}^\infty \Bbb{P}(A_n)< \infty$ then $$\Bbb{P}\left(\limsup_{n\rightarrow \infty} A_n\right)=0$$

My idea was the following:

Proof. We know by definition $\Bbb{P}\left(\limsup_{n\rightarrow \infty} A_n\right)=\Bbb{P}\left(\bigcap_{n\geq 1}\bigcup_{k\geq n}A_k \right)$. Now let $$B_n:=\bigcup_{k\geq n}A_k$$ Then we see that $B_n\supset B_{n+1}$ for all $n\in \Bbb{N}$. Hence we can use the continuity from above of the probability measure and get $\Bbb{P}\left(\bigcap_{n\geq 1}\bigcup_{k\geq n}A_k \right)=\lim_{n\rightarrow \infty}\Bbb{P}(B_n)=\lim_{n\rightarrow \infty}\Bbb{P}\left(\bigcup_{k\geq n}A_k\right)\leq\lim_{n\rightarrow \infty} \sum_{k=n}^\infty \Bbb{P}(A_k)\stackrel{\sum_{n=1}^\infty \Bbb{P}(A_n)< \infty}{=}0$

Hence $$0\leq \Bbb{P}\left(\limsup_{n\rightarrow \infty} A_n\right)\leq 0$$ Which proves the claim.

Does this work or is this wrong?

Thanks for your help
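For intuition only (not a substitute for the proof above): a small Monte Carlo sketch with independent events $A_k$, $\mathbb P(A_k)=2^{-k}$, illustrating that $\mathbb P(\bigcup_{k\ge n} A_k)$ is squeezed below the vanishing tail sums $\sum_{k\ge n}\mathbb P(A_k)=2^{-(n-1)}$.

```python
import random

random.seed(0)

def union_prob(n_start, k_max=30, trials=20000):
    # Monte Carlo estimate of P(union_{k = n_start}^{k_max} A_k)
    # for independent events A_k with P(A_k) = 2^{-k}
    hits = 0
    for _ in range(trials):
        if any(random.random() < 2.0 ** -k for k in range(n_start, k_max + 1)):
            hits += 1
    return hits / trials

starts = (1, 3, 6)
estimates = [union_prob(n) for n in starts]
tails = [2.0 ** -(n - 1) for n in starts]  # sum_{k >= n} 2^{-k}
```

The estimates decrease with $n$ and stay (up to sampling noise) below the tail sums, which is exactly the squeeze used in the proof.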

Find $\lim\left(\int_{0}^{1}((1-t)a+tb)^x\,dt\right)^{1/x}$ when $x\to0$ and $x\to\infty$.

Posted: 04 Jun 2022 07:12 AM PDT

My first step was to substitute $$(1-t)a+tb=u$$ so the integral becomes $$\frac{1}{b-a}\int_{a}^{b}u^x\,du=\frac{b^{x+1}-a^{x+1}}{(b-a)(x+1)}$$ then we need to find $$\lim\left(\frac{b^{x+1}-a^{x+1}}{(b-a)(x+1)}\right)^{1/x}$$ and I haven't made any progress from that point. I tried to apply logarithms and the Stolz–Cesàro Theorem, but none of that worked for me. I would appreciate any help simplifying this limit and finding its value. Thanks for your attention.
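Not in the original post, but a numeric sketch suggesting what the answers should be (assuming, as the computation hints, that the $x\to 0$ limit is the identric mean $\frac1e\,(b^b/a^a)^{1/(b-a)}$ and the $x\to\infty$ limit is $\max(a,b)$; the concrete values $a=1$, $b=2$ are mine):

```python
import math

def F(x, a=1.0, b=2.0):
    # ((b^{x+1} - a^{x+1}) / ((b - a)(x + 1)))^{1/x}
    return ((b ** (x + 1) - a ** (x + 1)) / ((b - a) * (x + 1))) ** (1.0 / x)

# identric mean (1/e) * (b^b / a^a)^{1/(b-a)}; equals 4/e for a = 1, b = 2
identric = (1 / math.e) * (2.0 ** 2 / 1.0 ** 1) ** (1 / (2.0 - 1.0))

small = F(1e-6)   # should be close to the identric mean
large = F(500.0)  # should creep up toward max(a, b) = 2
```

The convergence toward $\max(a,b)$ is slow (roughly like $b\cdot(x+1)^{-1/x}$), which is consistent with the $L^x$-norm interpretation of the integral.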

Size issues: does the "limits are computed pointwise in functor categories" theorem only apply if the functor category is locally small?

Posted: 04 Jun 2022 07:15 AM PDT

$\newcommand{\A}{\mathscr{A}}\newcommand{\I}{\mathscr{I}}\newcommand{\psh}{\mathsf{Psh}}\newcommand{\Set}{\mathsf{Set}}\newcommand{\op}{^\mathsf{op}}\newcommand{\ev}{\mathsf{ev}}$I am studying category theory from Tom Leinster's Basic Category Theory. In it, since it is a "basic" text, size issues are not discussed in detail, but briefly covered. However in chapter $6$ he has a strange, unexplained insistence on smallness. An important example ($\psh_\A=[\A\op,\Set]$ the category of presheaves):

$(\ast)$ Let $\A$ be a small category. Then $\psh_\A$ has all limits and colimits, and they are preserved (and jointly reflected) by all evaluation functors $\ev_A:\psh_\A\to\Set,\,X\mapsto X(A)$, $A\in\A$.

This is a corollary of a more general result:

Let $\A,\mathbf{I}$ be small categories and $\I$ a locally small category. Suppose for some diagram $D:\mathbf{I}\to[\A,\I]$ that for every $A\in\A$, the composite diagram $\ev_A\circ D$ has a limit in $\I$. Then $[\A,\I]$ has all limits of shape $\mathbf{I}$ and they are preserved (and jointly reflected) by the evaluation functors.

His only justification for this is that $[\A,\I]$ is guaranteed to be locally small under these conditions. Why this is necessary is not expounded upon. My own best guess for this is that the proofs of the above rely on evaluation functors, and if $[\A,\I](X,Y)$ is a large class for some $X,Y$, then $\ev_A$ runs the risk of embedding a large class of natural transformations into a set of arrows $\I(X(A),Y(A))$, which cannot work. That said, the nLab claims this theorem $(\ast)$ holds for presheaf categories regardless of whether or not $\A$ is small (it need only be locally small, apparently).

So, which is it? Can I really assume that $(\ast)$ holds if $\A$ is relaxed to be locally small?

I have two motivations for this question. The first is general interest (since the pointwise limit theorem is very powerful), and the second is in the following exercise:

Let $\A$ be a locally small category and $B\in\A$. Show that representables have the following connectedness property: if there exist $X,Y\in\psh_\A$ such that $\A(-,B)\cong X+Y$, exactly one of $X,Y$ is the degenerate constant empty functor.

My solution needs the pointwise limit theorem to apply even for $\A$ locally small:

Modifying $X,Y$ up to isomorphism we have that $\A(A,B)=(X+Y)(A)=X(A)+Y(A)$ for all $A\in\A$ by the pointwise limit theorem, and $(Xp)(f)=f\circ p$ for all $p\in\A(A',A)$, $f\in X(A)\subseteq\A(A,B)$; the same holds true for $Y$.

As $\A(B,B)$ contains $1_B$, always, without loss of generality suppose $1_B\in X(B)$. For any $A\in\A$, if there exists $p\in Y(A)$ then $-\circ p:\A(B,B)\to\A(A,B)$ is equal to the action $Xp:X(B)\to X(A)$, so in particular $1_B\circ p=p\in X(A)$ despite $p\in Y(A)\implies p\notin X(A)$, a contradiction. Therefore there never exists a $p\in Y(A)$, for any $A\in\A$, so $Y$ is the constant empty functor.

If the three vectors are co-planar, then what is the value of $a$?

Posted: 04 Jun 2022 07:07 AM PDT

Question:

What should be the value of $a$ so that the three vectors $2\hat{i}+\hat{j}-\hat{k}$, $3\hat{i}-2\hat{j}+4\hat{k}$ and $\hat{i}-3\hat{j}+a\hat{k}$ are coplanar?

(a) 5

(b) 7

(c) 4

(d) 3

My attempt:

Using the scalar triple product, I found that $a=5$. So, (a) is correct.

My instructor's attempt:

$$\begin{vmatrix} 2 & 1 & -1\\ 3 & -2 & 4\\ 1 & -3 & a \end{vmatrix}=0$$

And then they solved for $a$. They also found that $a=5$. So, (a) is correct.

My question:

  1. I've never seen the problem done in the way my instructor did. I always have done this type of problem using the scalar triple product. Where does my instructor's attempt originate from? What are the differences between my attempt and my instructor's attempt, if any?
  2. Using the scalar triple product, we get the volume enclosed by the three vectors. If the volume is zero, we conclude that the vectors are co-planar. Can the volume enclosed by the vectors be found using my instructor's process?
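A concrete way to connect the two attempts (a sketch): the determinant of the $3\times 3$ matrix whose rows are the vectors *is* the scalar triple product $\vec u\cdot(\vec v\times\vec w)$, so the two methods are the same computation, and its absolute value is the volume asked about in question 2.

```python
def cross(u, v):
    return (u[1] * v[2] - u[2] * v[1],
            u[2] * v[0] - u[0] * v[2],
            u[0] * v[1] - u[1] * v[0])

def dot(u, v):
    return sum(ui * vi for ui, vi in zip(u, v))

def det3(r1, r2, r3):
    # determinant with rows r1, r2, r3 equals the triple product r1 . (r2 x r3)
    return dot(r1, cross(r2, r3))

u, v = (2, 1, -1), (3, -2, 4)
w = (1, -3, 5)                 # candidate answer a = 5
triple = dot(u, cross(v, w))   # scalar triple product (the asker's method)
determinant = det3(u, v, w)    # determinant (the instructor's method)
```

Both quantities vanish exactly at $a=5$ (the determinant is $-7a+35$), and for any other $a$ they give the same nonzero signed volume.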

If $f \in C^2 (\mathbb{R}^2; \mathbb{R})$, then $f$ has at least one critical point.

Posted: 04 Jun 2022 07:02 AM PDT

If $f \in C^1 (\mathbb{R}^3; \mathbb{R})$, then $f$ has at least one critical point.

This is false, right? An example: Let $f(x,y,z) = x$, then $f'(x,y,z) = 1$, which means $f$ has no critical points. However, $f(x,y,z) = x$ is not of class $C^2$. Can anyone give an example that would work to prove the statement false?

Probability of union of independent Poisson point processes intersected with bounded set

Posted: 04 Jun 2022 07:04 AM PDT

Let $X=\bigcup_{i=1}^{\infty}X_{i}$ be a union of mutually independent Poisson point processes on $S\subseteq\mathbb{R}$, and let $B\in\mathscr{B}(S)$ be bounded. Then: $$\tag{1} \mathbb{P}(X\cap B=\emptyset)=\prod_{i=1}^{\infty}\mathbb{P}(X_{i}\cap B=\emptyset). $$

My question: Why does (1) above hold true?

My thoughts: A sequence of random variables $X_{1},\dots,X_{n}$ is said to be independent if the induced sigma-algebras $\sigma(X_{1}),\dots,\sigma(X_{n})$ are independent. That is: $$\tag{2} \mathbb{P}\left(\bigcap_{i=1}^{n}A_{i}\right)=\prod_{i=1}^{n}\mathbb{P}(A_{i}),\quad A_{i}\in\sigma(X_{i}). $$ I've been trying to relate (2) to (1), but I can't seem to make the connection. What am I missing?
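One way to bridge (2) and (1), sketched: the event $\{X_i\cap B=\emptyset\}$ lies in $\sigma(X_i)$, so mutual independence gives the product over any finite family, and continuity from above of $\mathbb P$ (the finite intersections decrease in $n$) passes to the limit:

```latex
\mathbb{P}\Big(\bigcap_{i=1}^{\infty}\{X_i\cap B=\emptyset\}\Big)
=\lim_{n\to\infty}\mathbb{P}\Big(\bigcap_{i=1}^{n}\{X_i\cap B=\emptyset\}\Big)
=\lim_{n\to\infty}\prod_{i=1}^{n}\mathbb{P}(X_i\cap B=\emptyset)
=\prod_{i=1}^{\infty}\mathbb{P}(X_i\cap B=\emptyset),
```

and $\{X\cap B=\emptyset\}=\bigcap_{i}\{X_i\cap B=\emptyset\}$ because $X=\bigcup_i X_i$.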

How do I linearize $Y=A+Be^{CX}$

Posted: 04 Jun 2022 07:14 AM PDT

I am trying to linearize the following equation: $Y=a+be^{cx}$ where $Y$ and $x$ are given data points and $a$, $b$ and $c$ are constants that need to be found. And by rearranging the equation, I got to this point:

$$\begin{align*} \ln Y &= \ln a+\ln(be^{cx}) \\ \ln Y &= cx + \ln a + \ln b \\ \ln Y &= cx + \ln(ab) \end{align*}$$

Using a graphical software, I linearized the formula ($y=mx+\text{intercept})$ and got my constant $c$. However I am stuck at finding exact values of constants $a$ and $b$. This is where I have reached:

$$\ln(ab)=p$$

$$a=\frac{e^p}b$$

where $p$ is the $y$-intercept which is known. Any leads/approach to finding the exact values for constants $a$ and $b$?
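One possible lead (a sketch, not the only approach, and the synthetic data below are mine): once $c$ is fixed from the graphical step, the model $Y=a+be^{cx}$ is *linear* in $a$ and $b$, so both can be recovered directly by ordinary least squares against the regressor $t=e^{cx}$.

```python
import math

def fit_a_b(xs, ys, c):
    ts = [math.exp(c * x) for x in xs]  # regressor t = e^{c x}
    n = len(xs)
    st, stt = sum(ts), sum(t * t for t in ts)
    sy, sty = sum(ys), sum(t * y for t, y in zip(ts, ys))
    det = n * stt - st * st             # normal equations for y = a + b t
    a = (stt * sy - st * sty) / det
    b = (n * sty - st * sy) / det
    return a, b

# synthetic check with a = 2, b = 3, c = 0.5 (illustration only)
xs = [0.0, 0.5, 1.0, 1.5, 2.0]
ys = [2 + 3 * math.exp(0.5 * x) for x in xs]
a_hat, b_hat = fit_a_b(xs, ys, 0.5)
```

With noisy data the same normal equations give the least-squares estimates of $a$ and $b$ for the chosen $c$.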

Bound/free variables in $\sum_{k=1}^{10}f(k,n)$

Posted: 04 Jun 2022 07:17 AM PDT

I'm looking at this treatment of bound and free variables. Down a bit is this

$$\sum_{k=1}^{10}f(k,n) $$

but then the cryptic explanation

$n$ is a free variable and $k$ is a bound variable; consequently the value of this expression depends on the value of $n$, but there is nothing called $k$ on which it could depend.

Can anyone tell me what they're saying here? If $f(k,n) = (k + n)$, i.e.,

\begin{align} \sum_{k=1}^{3}(k+n) &= (1+n) + (2+n) + (3+n) \\ &= (1+2+3) + (n+n+n) \\ &= 6 + 3n \end{align}

It's obvious that $n$ is an unknown quantity supplied from elsewhere while we sum over $k$, but I don't get the wording of the above quote.
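A programming analogy may help (hypothetical code, just mirroring the example): inside the sum, $k$ is bound by the summation the same way a loop variable is bound by its loop, while $n$ must be supplied from outside, so the result depends on $n$ but there is no $k$ left to depend on.

```python
def f_sum(n):
    # k is bound by the generator; n is free in the summand (k + n),
    # so the value depends on n, but no "k" survives outside the sum
    return sum(k + n for k in range(1, 4))

# matches the hand computation: sum_{k=1}^{3} (k + n) = 6 + 3n
```

Asking "what is the value of $k$ after the sum?" is meaningless, exactly as the quote says; asking the same about $n$ is not.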

Find the number of factors of some number which are perfect squares

Posted: 04 Jun 2022 07:18 AM PDT

The problem is "Find the number of factors of the product $5^8 \cdot 7^5 \cdot 2^3$ which are perfect squares"

From a simple Google search, the answer happens to be 30, but I don't have any idea how.
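The count can be checked by brute force and explained by the exponent pattern (sketch): a divisor $5^i 7^j 2^k$ is a perfect square iff $i,j,k$ are all even, giving $i\in\{0,2,4,6,8\}$, $j\in\{0,2,4\}$, $k\in\{0,2\}$, hence $5\cdot 3\cdot 2=30$ choices.

```python
from math import isqrt

N = 5 ** 8 * 7 ** 5 * 2 ** 3
# brute force: the perfect-square divisors of N are exactly the d*d with d*d | N
count = sum(1 for d in range(1, isqrt(N) + 1) if N % (d * d) == 0)
```

Equivalently: for each prime with exponent $e$ in $N$, a square divisor can use that prime with exponent $0,2,\dots,2\lfloor e/2\rfloor$, i.e. $\lfloor e/2\rfloor+1$ choices, and the counts multiply.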

What is the maximum relative density of squares congruent to m modulo n for chosen m and n? [closed]

Posted: 04 Jun 2022 07:02 AM PDT

Let $f(m,n)$ be the number of values of $k$ between $1$ and $n$ such that $k^2$ is congruent to $m$ modulo $n$. I call $f(m,n)^2/n$ the relative density of squares modulo $n$. For $m = 1$ and $n = 24$, this relative density is $8/3$. Is any higher relative density possible? This question is important because it helps discover the distribution of the squares and might be related to things such as quadratic reciprocity.
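A brute-force sketch for experimenting with small moduli (the helper names are mine); it reproduces the $n=24$ example from the question and can be looped over other $(m,n)$ pairs:

```python
def f(m, n):
    # number of k in 1..n with k^2 ≡ m (mod n)
    return sum(1 for k in range(1, n + 1) if (k * k - m) % n == 0)

def density(m, n):
    # the question's "relative density" f(m, n)^2 / n
    return f(m, n) ** 2 / n
```

For instance, `max(density(m, n) for n in range(1, 100) for m in range(n))` would scan all moduli below 100.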

How to show that the integral $\int_{\mathbb R^n}(1 + |x|)^{-L}\,dx$ exists when $L>n$

Posted: 04 Jun 2022 07:15 AM PDT

How to show that the integral $\int_{\mathbb R^n}(1 + |x|)^{-L}\,dx$ exists (in the sense of the Lebesgue integral) when $L>n$? I have computed the case of $\mathbb R^2$ using polar coordinates, and I am guessing we can use spherical coordinates for $\mathbb R^3$. The problem, though, is that polar coordinates in higher dimensions get more and more complicated; but since our function is radial, shouldn't there be a much simpler version of polar coordinates we can use? If there is a way to prove the statement without changing coordinates, I would also like to know how.
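There is indeed a simpler "polar coordinates" for radial functions (sketched): for measurable $g\ge 0$ one has $\int_{\mathbb R^n} g(|x|)\,dx = \omega_{n-1}\int_0^\infty g(r)\,r^{n-1}\,dr$, where $\omega_{n-1}$ is the surface area of the unit sphere in $\mathbb R^n$. Applied here, with $r^{n-1}\le(1+r)^{n-1}$:

```latex
\int_{\mathbb R^n}(1+|x|)^{-L}\,dx
=\omega_{n-1}\int_0^\infty \frac{r^{n-1}}{(1+r)^{L}}\,dr
\le \omega_{n-1}\int_0^\infty (1+r)^{\,n-1-L}\,dr,
```

which is finite precisely when $L-(n-1)>1$, i.e. $L>n$.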

Analytic formula for $E(\rho):=\sum_{m=0}^\infty \mu_m(\rho)/m!$, with $\mu_m(\rho) := \sum_{k=0}^{m-1}\dfrac{\rho^k}{k+1}{m \choose k}{m-1\choose k}$

Posted: 04 Jun 2022 07:03 AM PDT

Let $\rho \in (0,\infty)$ and for any integer $m \ge 1$, define $\mu_m \ge 0$ by $$ \mu_m := \sum_{k=0}^{m-1}\frac{\rho^k}{k+1}{m \choose k}{m-1\choose k}. $$

Finally define $E \ge 0$ by $$ E := \sum_{m=0}^\infty \frac{\mu_m}{m!} $$

Question. Is there an analytic formula for $E$ in terms of $\rho$?

Context: $E$ corresponds to the expected value of the trace of the exponential of an $n \times d$ Wishart matrix, in the limit $n,d \to \infty$ with $n/d \to \rho$.


Related: https://mathoverflow.net/q/423906/78539
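No closed form to offer, but a numerical sketch of the series (the truncation level $M=80$ is my assumption; the terms decay factorially). It can be sanity-checked against the $\rho\to 0$ limit, where only the $k=0$ terms survive and $E\to\sum_{m\ge 1}1/m! = e-1$.

```python
from math import comb, factorial, e

def mu(m, rho):
    # mu_m = sum_{k=0}^{m-1} rho^k/(k+1) * C(m,k) * C(m-1,k); empty sum = 0 for m = 0
    return sum(rho ** k / (k + 1) * comb(m, k) * comb(m - 1, k) for k in range(m))

def E_of(rho, M=80):
    # truncation of E = sum_{m >= 0} mu_m / m!
    return sum(mu(m, rho) / factorial(m) for m in range(M + 1))
```

Since every $\mu_m$ is increasing in $\rho$, the truncated $E$ is increasing in $\rho$ as well, which gives a further cheap consistency check.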

Prove that these four definitions of $T_{3{\frac 1 2}}$ space are equivalent.

Posted: 04 Jun 2022 07:26 AM PDT

Definition

A subset $Y$ of a topological space $X$ is a zero-set if there exists a continuous real-valued function $f:X\rightarrow\Bbb R$ such that $$ Y=f^{-1}[0] $$ So we say that a subset $Z$ of $X$ is a cozero-set if $X\setminus Z$ is a zero-set.

So we observe that a zero-set is closed whereas a cozero-set is open, and moreover finite intersections and finite unions of zero-sets (resp. cozero-sets) are zero-sets (resp. cozero-sets). Finally, we observe that the sets $$ \{x\in X:f(x)>a\}\quad\text{and}\quad\{x\in X:a<f(x)<b\}\quad\text{and}\quad\{x\in X:f(x)<b\} $$ are cozero-sets.

So we give the following definition.

Definition

A topological space $X$ is said to be completely regular if for any closed set $C$ and any $x\notin C$ there exists a continuous function $f$ from $X$ to $[0,1]$ such that $$ f(x)=0\quad\text{and}\quad f[C]\subseteq\{1\} $$

However it is a well-known result that $[0,1]$ is homeomorphic to any other interval $[a,b]$ via a function $\varphi$ such that $$ \varphi(0)=b\quad\text{and}\quad \varphi(1)=a $$ so we observe that $X$ is completely regular if and only if for any closed set $C$ and any $x\notin C$ there exists a continuous function $f$ from $X$ to $[a,b]$ such that $$ f(x)=b\quad\text{and}\quad f[C]\subseteq\{a\} $$

Now we give the following definition

Definition

A space $X$ is said to be $T_{3_{\frac 1 2}}$ if it is $T_1$ and completely regular.

So I tried to prove the following relevant result.

Theorem

For any $T_1$ space $X$ the following four statements are equivalent.

  1. $X$ is $T_{3_{\frac 1 2}}$
  2. the collection $C([0,1]^X)$ of continuous functions from $X$ to $[0,1]$ generates the topology on $X$, that is the collection $$ \mathcal S:=\big\{f^{-1}[V]: f\in C([0,1]^X)\quad\text{and}\quad V\,\text{open in }[0,1]\big\} $$ is a subbase for $X$.
  3. the collection of cozero-sets is a base for $X$
  4. there exists a base $\mathcal B$ such that

4.1. for any basic open neighborhood $B_x$ of $x\in X$ there exists a basic open set $B$ not containing $x$ and such that $$ B_x\cup B=X $$

4.2. for any pair $B_1$ and $B_2$ of basic open sets there exists another pair $A_1$ and $A_2$ of disjoint basic open sets such that $$ X\setminus B_1\subseteq A_1\quad\text{and}\quad X\setminus B_2\subseteq A_2 $$

Proof.

  1. So if $A$ is an open set of $X$ then $X\setminus A$ is closed and so for any $x\in A$ there exists $f\in C([0,1]^X)$ such that $$ f(x)=1\quad\text{and}\quad f[X\setminus A]\subseteq\{0\} $$ that is such that $$ f(x)=1\quad\text{and}\quad X\setminus A\subseteq f^{-1}[0] $$ so that we conclude that $$ x\in f^{-1}\big[(0,1]\big]=f^{-1}\big[[0,1]\setminus\{0\}\big]=f^{-1}\big[[0,1]\big]\setminus f^{-1}[0]=X\setminus f^{-1}[0]\subseteq A $$ which proves that the collection $$ \big\{f^{-1}\big[(0,1]\big]:f\in C([0,1]^X)\big\} $$ is a base for $X$ and so the statement follows observing that $$ \big\{f^{-1}\big[(0,1]\big]:f\in C([0,1]^X)\big\}\subseteq\cal S $$
  2. First of all we observe that the inclusion $$ C([0,1]^X)\subseteq C(\Bbb R^X) $$ holds and thus, as observed above, for any $f\in C([0,1]^X)$ the set $$ f^{-1}\big[(\epsilon,1]\big] $$ with $\epsilon\in(0,1)$ is a cozero-set: thus the statement follows by proving that if $A$ is open then for any $x\in A$ there exists $f\in C([0,1]^X)$ such that $$ x\in f^{-1}\big[(\epsilon,1]\big]\subseteq A $$ So if $C([0,1]^X)$ generates the topology on $X$ then for any $x_0\in A$ there exists $f\in C([0,1]^X)$ and a basic open set $B$ of $[0,1]$ such that $$ x_0\in f^{-1}[B]\subseteq A $$ and thus we distinguish the case where $$ y_0:=f(x_0) $$ is zero from the case where it is not. $$ \bf{ CASE\,I}\\y_0=0 $$ So if $y_0$ is zero then without loss of generality we suppose that $$ B=[0,\epsilon) $$ for some $\epsilon\in(0,1)$, and we observe that the position $$ \varphi(y):=1-y $$ for any $y\in [0,1]$ defines a homeomorphism $\varphi$ from $[0,1]$ to $[0,1]$ such that $$ \varphi^{-1}\big[(1-\epsilon,1]\big]=[0,\epsilon) $$ and thus we conclude that $\varphi\circ f$ is a continuous function from $X$ to $[0,1]$ such that $$ x_0\in(\varphi\circ f)^{-1}\big[(1-\epsilon,1]\big]\subseteq A $$ $$ \bf CASE\, II\\ y_0\neq 0 $$ If $y_0$ is not zero then we distinguish the following two cases

i.$B=B(y_0,\epsilon):=(y_0-\epsilon,y_0+\epsilon)$ with $\epsilon\in\Bbb R^+$

First of all we observe that the positions $$ \text{a. }f_1(y):=\frac y{y_0} \\ \text{b. }f_2(y):=\frac{1-y}{1-y_0} $$ for any $y\in\Bbb R$ define two continuous functions $f_1$ and $f_2$, so we observe that the position $$ \varphi(y):=\begin{cases}f_1(y),\quad\text{if }y\le y_0\\ f_2(y),\quad\text{otherwise}\end{cases} $$ for any $y\in[0,1]$ defines a continuous function from $[0,1]$ to $[0,1]$ (with respect to the subspace topology). Now put $$ \varepsilon:=\max\big\{\varphi(y_0-\epsilon),\varphi(y_0+\epsilon)\big\} $$ and thus we now prove that $$ \varphi^{-1}\big[(\varepsilon,1]\big]\subseteq B(y_0,\epsilon) $$ First of all we observe that $$ \varphi^{-1}\big[(\varepsilon,1]\big]=f^{-1}_1\big[(\varepsilon,1]\big]\cup f^{-1}_2\big[(\varepsilon,1]\big] $$ where $f^{-1}_i$ for $i=1,2$ is restricted to $[0,1]$, the domain of $\varphi$: so the statement follows by proving that $$ f^{-1}_i\big[(\varepsilon,1]\big]\subseteq B(y_0,\epsilon) $$ for $i=1,2$. So we observe that $f_1$ is strictly increasing (and thus injective too) so that the inequality $$ y_0-\epsilon\le f^{-1}_1(\varepsilon)\le y_0 $$ holds and thus we conclude that $$ f^{-1}_1\big[(\varepsilon,1]\big]\subseteq (y_0-\epsilon,y_0]\subseteq B(y_0,\epsilon) $$ Similarly we observe that $f_2$ is strictly decreasing (and thus injective too) so that the inequality $$ y_0\le f^{-1}_2(\varepsilon)\le y_0+\epsilon $$ holds and thus we conclude that $$ f^{-1}_2\big[(\varepsilon,1]\big]\subseteq [y_0,y_0+\epsilon)\subseteq B(y_0,\epsilon) $$

ii.$B=(\epsilon, 1]$ with $\epsilon\in(0,1)$

In this case $f^{-1}[B]$ is trivially a cozero neighborhood of $x_0$.

  3. Now for any $f\in C(\Bbb R^X)$ let $Z_f$ be its set of zero points, let $Z_f^*$ be its complement, and finally let $\cal Z^*$ be the collection of all such sets, which we now assume is a base; we prove that it satisfies $4.1$ and $4.2$.

4.1 Well, if $x_0$ is an element of any $Z^*_f$ then putting $$ y_0:=f(x_0) $$ we observe that the position $$ \varphi(y):=y_0-y $$ for any $y\in\Bbb R$ defines a continuous function so that also $\varphi\circ f$ is continuous. So observing that $$ (\varphi\circ f)(x)=y_0\neq 0 $$
for any $x\in Z_f$ we conclude that $$ Z_f\subseteq Z^*_{\varphi\circ f} $$ that is $$ Z^*_f\cup Z^*_{\varphi\circ f}=X $$ Moreover we observe that $$ (\varphi\circ f)(x_0)=0 $$ that is $$ x_0\notin Z^*_{\varphi\circ f} $$ So $\mathcal Z^*$ satisfies $4.1$.

4.2 So we observe that if any $Z^*_{f_1}$ and $Z^*_{f_2}$ are disjoint when their union is $X$ then $\cal Z^*$ satisfies $4.2$ trivially, so we suppose there exists a non-disjoint pair $Z^*_{f_1}$ and $Z^*_{f_2}$ whose union is $X$. So in this case we pick $x_0$ in $Z^*_{f_1}\cap Z^*_{f_2}$ and thus we observe that the positions $$ \text{a. }\varphi_1(x):=\min\big\{f_1^2(x)-f^2_2(x),0\big\} \\ \text{b. }\varphi_2(x):=\max\big\{f_1^2(x)-f_2^2(x),0\big\} $$ for any $x\in X$ define two continuous functions $\varphi_1$ and $\varphi_2$, so we claim that $Z^*_{\varphi_1}$ and $Z^*_{\varphi_2}$ are two disjoint open sets containing $Z_{f_1}$ and $Z_{f_2}$ respectively. So we observe that if $\varphi_1(x)$ is not zero for any $x\in X$ then $\varphi_2(x)$ is zero and vice versa, so that $X$ is the union of $Z_{\varphi_1}$ and $Z_{\varphi_2}$ and thus finally $$ Z_{\varphi_1}^*\cap Z_{\varphi_2}^*=(X\setminus Z_{\varphi_1})\cap(X\setminus Z_{\varphi_2})=X\setminus(Z_{\varphi_1}\cup Z_{\varphi_2})=X\setminus X=\emptyset $$ After all, $X$ is the union of $Z_{f_1}^*$ and $Z_{f_2}^*$, so that $Z_{f_1}$ and $Z_{f_2}$ are disjoint and thus $\varphi_i$ is not zero on $Z_{f_i}$ for $i=1,2$, that is $$ Z_{f_i}\subseteq Z^*_{\varphi_i} $$

  4. So if $C$ is a closed set then $X\setminus C$ is open and thus for any $x_0\in X\setminus C$ there exists a basic open set $U$ such that $$ x_0\in U\subseteq X\setminus C $$ Now by assumption there exists a basic open set $V$ not containing $x_0$ and such that $$ U\cup V=X $$ Moreover by assumption there exist two disjoint basic open sets $A$ and $B$ such that $$ X\setminus U\subseteq B\quad\text{and}\quad X\setminus V\subseteq A $$ So we observe that $$ x_0\in X\setminus V\subseteq A\quad\text{and}\quad C\subseteq X\setminus U\subseteq B $$ and moreover, since $A$ and $B$ are disjoint, $X$ is the union of $X\setminus A$ and $X\setminus B$. So with these achievements we put $$ f(x):=\begin{cases}0,\quad\text{if }x\in A\\ \frac 1 2,\quad\text{if }x\in (X\setminus A)\cap (X\setminus B)\\ 1,\quad\text{if }x\in B\end{cases} $$ for any $x$, and we claim that this position defines a continuous function.

So, as you can see, I was not able to prove that statement $4$ implies statement $1$, so I thought to post a specific question where additionally I ask whether the proofs for the first three statements are correct: in particular I tried to use the pasting lemma to prove the continuity of $f$, but unfortunately it seems not to work. So could someone help me, please?

Unitors in star-autonomous categories

Posted: 04 Jun 2022 07:23 AM PDT

1. Context
Let $(C, \otimes, I, a, l,r)$ be a monoidal category. Suppose $S: C^{op} \xrightarrow{\sim} C$ is an equivalence of categories with inverse $S'$. Assume that there are bijections $\phi_{X,Y,Z}: Hom_C(X \otimes Y,SZ) \xrightarrow{\sim} Hom_C(X, S(Y \otimes Z))$ natural in $X,Y,Z$. (This makes $C$ a star-autonomous category.) Note that we do not assume that $S$ is a monoidal equivalence. For simplicity suppose that the associator $a$ is the identity and that $S$ and $S'$ are strict inverses.

2. Question

Is the morphism $$\phi_{B,I,S'B}^{-1}\big(S(l_{S'B})\big):B \otimes I \rightarrow SS'(B)=B$$ equal to the right unitor $r_B$ for every $B \in C$?

Some ideas and follow-up questions:

  • This equality would imply that in star-autonomous categories a choice of one unitor immediately determines the other.

  • Does $\phi$ map isomorphisms to isomorphisms?
    At least for $\phi^{-1}$ I know this not to be true: Note that a star-autonomous category is monoidal closed with left internal hom $[X,Y]:=S(X \otimes S'Y)$. Even though the map $id_{S(X \otimes S'Y)}=id_{[X,Y]}$ is invertible, the morphism $\phi^{-1}_{[X,Y],X,S'Y}(id_{[X,Y]})$ is the evaluation morphism $ev_{X,Y}$, which is in general not an isomorphism.
    Maybe one can show that $\phi_{B,I,S'B}(r_B): B \rightarrow S(I \otimes S'B)$ is not in general an isomorphism. This would give a negative answer to the above question. The Yoneda lemma (covariant version) tells us something about the form of natural transformations between functors $F,Hom(-,X): C^{op} \rightarrow Set$. Can it be modified to cover natural transformations between functors $C^{op} \times C^{op} \times C^{op} \rightarrow Set$ as above?

  • Does there exist a property (for example the requirement that the unitor makes a certain diagram commute) that characterizes the left/right unitor uniquely? This would indicate a strategy for giving a positive answer: One could try to verify that $\phi_{B,I,S'B}^{-1}\big(S(l_{S'B})\big)$ satisfies the characteristic property. Such a property could only hold in special cases (for instance for star-autonomous categories) since in general monoidal categories unitors are a chosen structure not a property.

  • What is a good place to look for counterexamples to the equality?

  • I tried rigid monoidal categories – to no avail:
    Consider for instance the category $vect_{\mathbb F}$ of finite-dimensional vector spaces over a field $\mathbb F$ with usual tensor product. Let $S=S':=(-)^*$ be the duality functor. Define $\phi$ on $X, Y, Z \in vect_{\mathbb F}$ as $$\Big( \big(\phi_{X,Y,Z}(k)\big)(x) \Big)(y\otimes z):=\big(k(x\otimes y)\big)(z)$$ for $k:X \otimes Y \rightarrow Z^*$. Then $\phi$ is natural in all three components and invertible for any $X,Y,Z \in vect_{\mathbb F}$ for reasons of dimension. Denote by $\iota_X: X \rightarrow X^{**}$ the canonical identification of $X \in vect_{\mathbb F}$ with its double dual. One shows that $l_{X^*}^* \circ \iota_X=\phi_{X,\mathbb F, X^*}(\iota_X \circ r_X)$ for all $X \in vect_{\mathbb F}$ by evaluating both sides first on an element $x \in X$ and then on simple tensors in $\mathbb F \otimes X^*$.
  • Similarly, the category of quadratic algebras presented on the nLab satisfies the above equation. This is because the natural transformation $\phi$ is essentially the one for finite-dimensional vector spaces.
  • The semicartesian *-autonomous category related to Łukasiewicz logic satisfies the above equation since it is a posetal category.
  • A hopefully useful observation:
    Using the naturality of $r$ and $\phi$ I was able to show that the above statement is equivalent to the claim that the following equality $$\phi_{S(I\otimes S'B), I,S'B}\big( S(l_{S'B}^{-1})\circ r_{S(I \otimes S'B)}\big)=id_{S(I\otimes S'B)}$$ holds. Maybe this observation is of help.

Using a Truth Table to conclude.

Posted: 04 Jun 2022 07:14 AM PDT

I have made this truth table for the following propositions.

While scouting and investigating the flora of an island, biologists identified three species of plants of particular interest: Dahlia, Crocus and Snowdrop. The biologists made the following observations:

  1. If Dahlias grow in an area, Crocus will also grow in that area.
  2. Either Crocus or Snowdrops grow in an area, but they never grow together in that area (exclusive or).
  3. Dahlias or Snowdrops always grow in an area.

I. Convert the aforementioned observations into propositional formulas. II. Relying on a truth table, identify which plants can grow in the different areas of the island (i.e., which plants of interest can you find in an area of the island?)

[truth table image]

The question is asking

Relying on a truth table, identify which plants can grow in the different areas of the Island (i.e: which plants of interest can you find in an area of the Island?)

I don't understand how I can use the table to see which plants can grow in the different areas of the island. Any help would be appreciated.
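One mechanical way to read the table (sketched; the variable names are mine): enumerate all eight assignments for $D$ (Dahlia), $C$ (Crocus), $S$ (Snowdrop) and keep only the rows where all three observations hold, i.e. the rows of the truth table where the conjunction of the formulas is true.

```python
from itertools import product

# D, C, S stand for "Dahlias / Crocuses / Snowdrops grow in this area"
# Observations: 1) D -> C    2) C xor S    3) D or S
rows = [(d, c, s)
        for d, c, s in product([False, True], repeat=3)
        if ((not d) or c) and (c != s) and (d or s)]
```

Only two rows survive: an area has either Snowdrops alone, or Dahlias together with Crocuses (and no Snowdrops).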

In how many ways can $n$ letters be placed in $m$ $(m>n)$ addressed envelopes, such that no letter is put in the correct envelope?

Posted: 04 Jun 2022 07:27 AM PDT

I am confused on a problem. It goes as following:

In how many ways can $n$ letters be placed in $m$ $(m>n)$ addressed envelopes, such that no letter is put in the correct envelope?

Here is the solution given in the textbook (for $n=4$ and $m=5$): [textbook solution image]

Can anyone provide any simple approach?
I tried to do it by breaking it into several parts, but it didn't work out well and I ended up making more trouble.
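A simple cross-check for the $n=4$, $m=5$ case (a sketch; the inclusion-exclusion sum runs over how many letters land in their own envelope):

```python
from itertools import permutations
from math import comb, perm

n, m = 4, 5
# brute force: an arrangement is an ordered choice of n distinct envelopes,
# where envelope i is the "correct" one for letter i
brute = sum(1 for p in permutations(range(m), n)
            if all(p[i] != i for i in range(n)))

# inclusion-exclusion on the number k of letters placed correctly:
# sum_k (-1)^k C(n,k) * P(m-k, n-k)
formula = sum((-1) ** k * comb(n, k) * perm(m - k, n - k) for k in range(n + 1))
```

Both counts agree, which suggests the inclusion-exclusion formula as the "simple approach" for general $n < m$.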

Let $F_\pi(\mathbb N)$ be the set of periodic functions. Show that $F_\pi(\mathbb N)\subseteq F(\mathbb N)$ (vector space of functions).

Posted: 04 Jun 2022 07:26 AM PDT

Question

Let $F(\mathbb N)=\{f:\mathbb{N\to R}\}$ denote the vector space of functions from $\mathbb N$ to $\mathbb R$. The addition and scalar multiplication in $F(\mathbb N)$ are:$$(f+g)(n)=f(n)+g(n) \qquad (kf)(n)=k(f(n))$$ A function $f:\mathbb{N\to R}$ is called periodic if there is some number $k$ such that $f(n)=f(n+k)\; \forall n$.

Let $F_\pi(\mathbb N)$ be the set of periodic functions. Show that $F_\pi(\mathbb N)\subseteq F(\mathbb N)$.

I know how to solve the problem; however, what I'm confused about is how I would go about finding the zero vector within this set of functions. My best guess is that $n=0$ would be the example, and $f(0)$ would be the vector that I would need to prove is within the set $F(\mathbb N)$; however, I'm skeptical that I even have the right idea, which isn't helped by the fact that the example doesn't show $n=0$ as a possible occurrence within these functions. (Although the problem does state that periodic functions like this hold for all $n$, it doesn't give $n$ a set, so I'm not sure I can include $0$.)
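If the exercise is really the subspace test (as the zero-vector worry suggests), note that the zero vector of $F(\mathbb N)$ is the function sending *every* $n$ to $0$, not the single value $f(0)$; and this function is periodic with, e.g., $k=1$. A tiny sketch:

```python
def zero(n):
    # the zero vector of F(N): the constant-0 function, not "f(0)"
    return 0

# periodic with k = 1: zero(n) == zero(n + 1) for every n we test
periodic = all(zero(n) == zero(n + 1) for n in range(1000))
```

So no statement about $n=0$ is needed at all; the zero vector is an element of the function space, not a value of one function.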

What are the sensitivity equations that can be integrated to find these derivatives in the Hessian matrix?

Posted: 04 Jun 2022 07:12 AM PDT

In the paper Gutenkunst et al. 2007, part of the mathematical model analysis involves computing the Hessian matrix

$$H_{j,k}^{\chi^2}=\frac{\text d^2\chi^2}{\text d\log\theta_j\,\text d\log\theta_k}$$

where $j$ and $k$ are indices to denote different model parameters $\theta$, and $\chi^2$, which quantifies the change in model behavior as parameters $\theta$ are varied from their published values $\theta^*$, is given by

$$\chi^2(\theta)=\frac{1}{2N_cN_s}\sum_{s,c}\frac{1}{T_c}\int_0^{T_c}\left[\frac{y_{s,c}(\theta,t)-y_{s,c}(\theta^*,t)}{\sigma_s}\right]^2\,\text dt$$

where $s$ and $c$ (with corresponding numbers $N_s$ and $N_c$) denote the various species/conditions the model considers (irrelevant for this question), $T_c$ is the length of the time interval in the data of condition $c$, $\sigma_s$ is a normalization factor, and $y_{s,c}(\theta,t)$ is the model output.

The appendix of the paper combines the two above equations to evaluate $H^{\chi^2}$ at $\theta^*$

$$H_{j,k}^{\chi^2}=\frac{1}{N_cN_s}\sum\frac{1}{T_c\sigma_s^2}\int_0^{T_c}\frac{\text dy_{s,c}(\theta^*,t)}{\text d\log\theta_j}\frac{\text dy_{s,c}(\theta^*,t)}{\text d\log\theta_k}\,\text dt$$

The paper then states that this "is convenient because the first derivatives $\text dy_{s,c}(\theta^*,t)/ \text d \log \theta_j$ can be calculated by integrating sensitivity equations. This avoids the use of finite-difference derivatives, which are troublesome in sloppy systems." I have been trying to search around, but I cannot seem to find anything about these sensitivity equations they mention.

What are the sensitivity equations they mention here that can be integrated to determine the first derivatives in computing the Hessian matrix?
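My best guess so far (standard forward sensitivity analysis, not something the paper states explicitly): for an ODE model $\text dy/\text dt = f(y,\theta)$, the sensitivity $S(t) = \text dy/\text d\theta$ obeys its own ODE, obtained by differentiating the model equation with respect to $\theta$, namely $\text dS/\text dt = (\partial f/\partial y)\,S + \partial f/\partial\theta$ with $S(0)=0$ when the initial condition is $\theta$-free. A minimal sketch on a toy model (the toy model, parameter value, and step count are my own assumptions):

```python
# Sketch of forward sensitivity analysis (my reading, not taken from the
# paper).  Toy model: dy/dt = -theta * y with y(0) = 1, so analytically
# y = exp(-theta * t) and S = dy/dtheta = -t * exp(-theta * t).
# The sensitivity ODE is dS/dt = (df/dy) * S + df/dtheta = -theta*S - y.

def integrate_with_sensitivity(theta, t_end, n_steps=100000):
    """Euler-integrate the model and its sensitivity equation together."""
    dt = t_end / n_steps
    y, s = 1.0, 0.0
    for _ in range(n_steps):
        dy = -theta * y          # model ODE: f(y, theta) = -theta * y
        ds = -y - theta * s      # sensitivity ODE: df/dy = -theta, df/dtheta = -y
        y += dt * dy
        s += dt * ds
    return y, s
```

With such sensitivities in hand, the integrand in the Hessian formula above is just a product of two sensitivity trajectories (and in log-parameters, $\text dy/\text d\log\theta_j = \theta_j\,\text dy/\text d\theta_j$).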

Proof of $k$th derivatives always being integers

Posted: 04 Jun 2022 07:19 AM PDT

Consider the function $f(x)$ defined such that $$f(x)=\frac{x^n(1-x)^n}{n!}$$ Then prove that the $k$th derivatives $f^{(k)}(0)$ and $f^{(k)}(1)$ are always integers. Here $n$ and $k$ are integers and $n\ge1$ and $k\ge 0$

I used the binomial theorem to prove that the result holds for $x=0$ for all $k\le n$, which was trivial. But I'm not able to prove the general case. I also tried using induction, but that didn't work either.

Any research I do to find the answer online leads me to proofs of the irrationality of $\pi$ or $e^n$, where the above statement is taken as a starting point without proof.

Thanks for any answers!!
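In the meantime, here is a quick exact-arithmetic check (my own code; `derivative_values` is a name I made up) that the claim holds for small $n$, using the expansion $x^n(1-x)^n = \sum_j \binom{n}{j}(-1)^j x^{n+j}$:

```python
# Numeric spot-check (not a proof): treat f(x) = x^n (1-x)^n / n! as a
# polynomial with exact rational coefficients and verify that every
# derivative value f^(k)(0) and f^(k)(1) is an integer for small n.
from fractions import Fraction
from math import comb, factorial

def derivative_values(n):
    """Return lists [f^(k)(0)] and [f^(k)(1)] for k = 0..2n, as Fractions."""
    # Coefficient of x^m is (-1)^(m-n) C(n, m-n) / n! for n <= m <= 2n, else 0.
    coeffs = [Fraction(0)] * (2 * n + 1)
    for j in range(n + 1):
        coeffs[n + j] = Fraction((-1) ** j * comb(n, j), factorial(n))
    at0, at1 = [], []
    for k in range(2 * n + 1):
        # f^(k)(0) = k! * (coefficient of x^k)
        at0.append(factorial(k) * coeffs[k])
        # f^(k)(1) = sum over m of m!/(m-k)! * coeff_m, evaluated at x = 1
        at1.append(sum(Fraction(factorial(m), factorial(m - k)) * coeffs[m]
                       for m in range(k, 2 * n + 1)))
    return at0, at1

for n in range(1, 6):
    at0, at1 = derivative_values(n)
    assert all(v.denominator == 1 for v in at0 + at1)  # all integers
```

(Derivatives of order $k > 2n$ vanish identically, so checking $k \le 2n$ covers everything.)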

Possibility of subtraction.

Posted: 04 Jun 2022 07:27 AM PDT

In Apostol's Calculus Volume I book, there is a theorem, named Possibility of Subtraction, which states:

Given $a$ and $b$, there is exactly one $x$ such that $a + x = b$. This $x$ is denoted by $b - a$. In particular, $0 - a$ is written simply $-a$ and is called the negative of $a$.

Its proof is: Given $a$ and $b$, choose $y$ so that $a + y = 0$ and let $x = y + b$. Then, $a + x = a + (y + b) = (a + y) + b = 0 + b = b$. Therefore there is at least one $x$ such that $a + x = b$. But by (another theorem) there is at most one such $x$. Hence, there is exactly one.

QED

Now my question is that the only axioms that we are supposed to use to prove this theorem are:

  • Commutative laws
  • Associative laws
  • Distributive law
  • Existence of identity elements
  • Existence of negatives
  • Existence of reciprocals

The thing is that the closure axiom — if $a \in R$ and $b \in R$, then $a + b \in R$ — has not yet been stated. Given this, I don't understand how we can prove the existence of that $x$ without it.

Thank you in advance.

Rounding unit vs Machine precision

Posted: 04 Jun 2022 07:01 AM PDT

I'm not sure if this question should be asked here...

For a general floating point system defined using the tuple $(\beta, t, L, U)$, where $\beta$ is the base, $t$ is the number of digits in the mantissa, $L$ is the lower bound for the exponent, and $U$ is similarly the upper bound for the exponent, the rounding unit is defined as $$r = \frac{1}{2}\beta^{1 - t}$$

If I try to calculate the rounding unit for a single precision IEEE floating-point number which has 24 bits (23 explicit and 1 implicit), I obtain:

$$r = \frac{1}{2}2^{1 - 24} = \frac{1}{2}2^{-23} = 2^{-24}$$

which happens to be (using Matlab)

$$5.960464477539062 \times 10^{-8}$$

which seems to be half of

eps('single')  

that is, the machine precision for single-precision floating-point numbers in Matlab. From my understanding, the machine precision should be the distance from one floating-point number to the next.

If I do the same thing for double precision, the rounding unit again happens to be half of the machine precision, which is as follows

eps = 2.220446049250313e-16  

Why is that?

What's the relation between machine precision and rounding unit?

I think I understand the rounding unit: given a real number $x$, its floating-point representation $fl(x)$ is no farther than this unit from the actual $x$, correct?

But then what's this machine precision or epsilon?

Edit

If you look at the table in this Wikipedia article, there are two columns named "machine epsilon", where the entries of one column (the rounding unit) seem to be half the corresponding entries of the other column (the machine precision).

https://en.wikipedia.org/wiki/Machine_epsilon
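To make the factor of two concrete, here is a small check in Python (whose floats are IEEE double precision); the relationships asserted are my reading of the definitions above, with `eps` as the spacing from $1.0$ to the next double and `r` as the rounding unit:

```python
# Demonstration of the factor-of-two relationship: eps = 2**-52 is the gap
# from 1.0 to the next double, while the rounding unit r = 2**-53 is the
# largest absolute error round-to-nearest can introduce near 1.0.
import sys

eps = sys.float_info.epsilon      # machine epsilon: spacing at 1.0
r = eps / 2                       # rounding unit: half the spacing

assert eps == 2.0 ** -52
assert r == 2.0 ** -53
assert 1.0 + eps > 1.0            # adding eps reaches the next double
assert 1.0 + r == 1.0             # adding r rounds back down (ties-to-even)
```

So the rounding unit bounds the error of rounding to the nearest representable number, which is at most half the spacing between neighboring floats; that is exactly why one column in the table is half the other.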

Simplified form for $\frac{\operatorname d^n}{\operatorname dx^n}\left(\frac{x}{e^x-1}\right)$?

Posted: 04 Jun 2022 07:16 AM PDT

I have found the following formula: $$\frac{\operatorname d^n}{\operatorname dx^n}\left(\frac{x}{e^x-1}\right)=(-1)^n\,\frac{n\sum\limits_{k=0}^{n}e^{kx}\sum\limits_{i=0}^{k}(-1)^i\binom{n+1}{i}(k-i)^{n-1}+x\sum\limits_{k=0}^{n}e^{kx}\sum\limits_{i=0}^{k}(-1)^i\binom{n+1}{i}(k-i)^n}{\left(e^x-1\right)^{n+1}}. $$ My proof of this formula is complicated. Can somebody find a simpler proof?
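For anyone wanting to sanity-check the formula numerically before attempting a proof, here is a spot-check for small $n$ (my own code; the finite-difference helper is only for comparison, and Python's convention `0**0 == 1` matches what the inner sums need):

```python
# Numerical spot-check of the claimed closed form (not a proof): compare
# its value for n = 1, 2 against finite-difference derivatives of
# f(x) = x/(e^x - 1) at a test point.
from math import comb, exp

def f(x):
    return x / (exp(x) - 1.0)

def formula(n, x):
    """Right-hand side of the claimed closed form for the n-th derivative."""
    s1 = sum(exp(k * x) * sum((-1) ** i * comb(n + 1, i) * (k - i) ** (n - 1)
                              for i in range(k + 1)) for k in range(n + 1))
    s2 = sum(exp(k * x) * sum((-1) ** i * comb(n + 1, i) * (k - i) ** n
                              for i in range(k + 1)) for k in range(n + 1))
    return (-1) ** n * (n * s1 + x * s2) / (exp(x) - 1.0) ** (n + 1)

def fd_derivative(g, n, x, h=1e-2):
    """n-th derivative via the (n+1)-point central finite difference."""
    return sum((-1) ** i * comb(n, i) * g(x + (n / 2 - i) * h)
               for i in range(n + 1)) / h ** n
```

For $n=1$ the formula reduces to $\bigl[(e^x-1)-xe^x\bigr]/(e^x-1)^2$, which agrees with differentiating directly.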

No comments:

Post a Comment