Sunday, November 28, 2021

Recent Questions - Mathematics Stack Exchange


Mathematical induction: where to start with this problem?

Posted: 28 Nov 2021 11:55 AM PST

I'm unsure where to begin answering this question. What would be the base case when applying induction on $n$? $P(1) = {}$?

Prove: if there exists an injection $\mathbb{N}_m \to \mathbb{N}_n$, then $m \leq n$.

Consequences of the Globalization Theorem in Hirsch's Differential Topology

Posted: 28 Nov 2021 11:54 AM PST

In Hirsch's book, there is a wonderful theorem 2.11:

[image: statement of Theorem 2.11]

where a structure functor is simply a presheaf, and continuous means it is a sheaf (it has the gluing property). Nontrivial means there is at least one local section.

Locally extendable means, in his words:

[image: Hirsch's definition of locally extendable]

As an example, if $X$ is a $C^r$ manifold, and if $\mathfrak{F}(Y), Y\subseteq X$ is the set of compatible $C^s$ differential structures on $Y$, $1\leq r <s \leq \infty$, then $(\mathfrak{F}, \mathfrak{U})$ ($\mathfrak{U}$ the open sets of $X$) form a nontrivial, continuous, locally extendable structure functor.

My question is: how can we apply this technique to the problem of finding a retraction for a collar neighborhood? I copy the relevant excerpt:

[image: excerpt on finding a retraction for a collar neighborhood]

Why, for $x \in \mathbb{Z}$, if $(a, b) > (0,0)$ then $a > b$

Posted: 28 Nov 2021 11:54 AM PST

Thanks for your time.

I have been studying the axiomatic construction of the set of integers using a textbook, and in one of its proofs the authors use that conclusion as the starting point of a demonstration, but they do not say why they can assume that for $x \in \mathbb{Z}$, if $(a, b) > (0,0)$ then $a > b$.

By the way, I get that if $x=(a,b)<(0,0)$ then $-x=(b,a)>(0,0)$.

References for minimizers of energy in metric spaces

Posted: 28 Nov 2021 11:52 AM PST

As the title suggests, do you know some literature (books, articles, or notes) that treats the connection between geodesics in a metric space and the energy associated to curves? In particular, I am interested in some sort of $p$-energy, like the integral of the $p$-th power of the metric derivative of a curve, and in its relation to the curves that minimize the length. Is there any source that treats this particular case, if such sources exist?

I am confused at a step in the proof of Cauchy Criterion otherwise known as Cauchy Condensation

Posted: 28 Nov 2021 11:51 AM PST

Lemma 7.3.6 For any natural number $K$, we have $S_{2^{K+1}-1} \leq T_K \leq 2S_{2^K}$

where $S_N := \displaystyle \sum_{n=1}^{N} a_n$ and $T_K := \displaystyle \sum_{k=0}^{K} 2^ka_{2^k}$.

In this proof he uses induction, and I'm confused at this part:

$S_{2^{K+1}} = S_{2^K} + \displaystyle \sum_{n=2^K+1}^{2^{K+1}} a_n \geq S_{2^K} + \displaystyle \sum_{n=2^K+1}^{2^{K+1}} a_{2^{K+1}} = S_{2^K} + 2^Ka_{2^{K+1}}$

I'm confused: how does $\sum_{n=2^K+1}^{2^{K+1}} a_{2^{K+1}} = 2^Ka_{2^{K+1}}$?

In fact, how does the notation $\sum_{n=2^K+1}^{2^{K+1}} a_{2^{K+1}}$ even make sense?
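For what it's worth, both the counting fact behind that step and the lemma itself can be sanity-checked numerically for a sample nonincreasing sequence (here $a_n = 1/n^2$, my own choice):

```python
def S(N, a):
    # partial sum S_N = a_1 + ... + a_N
    return sum(a(n) for n in range(1, N + 1))

def T(K, a):
    # condensed sum T_K = sum_{k=0}^K 2^k * a_{2^k}
    return sum(2 ** k * a(2 ** k) for k in range(K + 1))

a = lambda n: 1.0 / (n * n)  # sample nonincreasing positive sequence

# the inner sum runs over n = 2^K + 1, ..., 2^{K+1}: exactly 2^K values of n
K = 5
assert len(range(2 ** K + 1, 2 ** (K + 1) + 1)) == 2 ** K

# the lemma itself
for K in range(8):
    assert S(2 ** (K + 1) - 1, a) <= T(K, a) <= 2 * S(2 ** K, a)
```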

Upper bound of $(1+x)^n$ and lower bound of $(1-x)^n$

Posted: 28 Nov 2021 11:49 AM PST

Is there any generic lower bound of $(1-x)^n$ and upper bound of $(1+x)^n$, where $0<x<1$ and $n\in\mathbb{N}$?

Series expansion of $\frac{1}{\log^{\alpha}\left(1+\frac{1}{s}\right)}$?

Posted: 28 Nov 2021 11:49 AM PST

The series expansion of $\log(1+x)$ around zero: $${\displaystyle\log(1+x)=\sum _{n=1}^{\infty }(-1)^{n+1}{\frac {x^{n}}{n}}}, \quad\text{valid for}\,\, -1<x\le 1.$$ I'm trying to get the series expansion of: $$f(s)=\frac{1}{\log^{\alpha}\left(1+\frac{1}{s}\right)}=\log^{-\alpha}\left(1+\frac{1}{s}\right),$$ where $s>0$ and $0<\alpha<1$.

Definition of ZFC2 (second order logic)

Posted: 28 Nov 2021 11:45 AM PST

ZFC with 1st order logic is known.

My question may be formulated in any of the following ways:

  • What is the definition of ZFC2 (ZFC + second-order logic)? (I assume that formulas also have quantification of two kinds, over objects and over predicates. Then what are some examples of statements in ZFC2 that cannot be proved in ZFC1?)
  • What are the sources with precise definitions of second-order logic?
  • Does there exist a "standard" deductive system for second-order logic? (What exactly do researchers mean when talking about second-order logic?)
  • What are the reasons to use second-order logic when one may have a type theory?

This question is motivated by https://mathoverflow.net/questions/409521/does-there-always-exist-a-categorical-extension-of-zfc-2-with-no-set-models and by What's a good introduction to Second Order Logic (which has no accepted answer).

Roots of unity converging to a complex number

Posted: 28 Nov 2021 11:54 AM PST

For any complex number $z$ which is on the unit circle but not a root of unity, how do I prove that there is a sequence of roots of unity that converges to $z$? I know that a root of unity is $e^{i\theta}$ where $\theta = 2\pi\frac{k}{n}$.
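To illustrate the claim numerically (a sketch; the angle $\theta = 2\pi(\sqrt{2}-1)$ is my own example of an irrational multiple of $2\pi$), one can take the nearest $n$-th root of unity for growing $n$:

```python
import cmath
import math

theta = 2 * math.pi * (math.sqrt(2) - 1)  # theta / (2*pi) is irrational
z = cmath.exp(1j * theta)                 # on the unit circle, not a root of unity

dists = []
for n in (10, 100, 1000, 10000):
    k = round(n * theta / (2 * math.pi))    # best approximation k/n to theta/(2*pi)
    root = cmath.exp(2j * math.pi * k / n)  # an n-th root of unity
    dists.append(abs(root - z))
```

Since $|\theta - 2\pi k/n| \leq \pi/n$, the distances shrink to zero as $n$ grows.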

Do the inference rules of the implication-free fragment of classical logic without disjunction elimination form a complete Hilbert calculus for LP?

Posted: 28 Nov 2021 11:48 AM PST

If I take the inference rules for the implication-free fragment of classical logic (listed below) and drop the pair of rules for disjunction elimination, do I get a complete Hilbert calculus for LP (Logic of Paradox)?

One thing that I am uncertain about, though, is whether pruning away disjunction elimination without adding back a weaker rule in its place actually gives me LP instead of some other system.

I also have a higher-level motivation for this question. I like trying to come up with semantics for various propositional logics, but I'm not good at proving or disproving the completeness of a given semantics. I'm trying to pick a simple example of a nonclassical logic so I can get a feel for how to build a completeness argument.


What follows is an explanation of the question in detail and my attempt to solve the problem myself.


I will assume classical logic and ordinary mathematics in the background for the purposes of this question.

Let's define the logic of paradox using the following truth table.

and          not
  F U T
-------      -----
F F F F      F T
U F U U      U U
T F U T      T F

And let disjunction be defined as follows: $a \lor b = \lnot(\lnot a \land \lnot b)$.

Logic of paradox has the following sound and complete semantics. This semantics is transparently based on the semantics of Belnap's four-valued logic.

$F$ is $\{0\}$.
$T$ is $\{1\}$.
$U$ is $\{0, 1\}$.

The designated truth values are $T$ and $U$.

We can define the semantics of the primitive operations as follows. Let $\oplus$ be xor, or equivalently addition modulo 2, and let $\triangle$ be the symmetric difference. These operations are really defined on $2^{\{0, 1\}}$, but they both preserve set non-emptiness.

$[\lnot x] = \{ 1 \oplus n : n \in [x] \}$
$[x \land y] = (([x] \triangle \{1\}) \cap ([y] \triangle \{1\})) \triangle \{1\} $

Logic of paradox, given the signature $\land, \lor, \lnot$ has the following inference rules (which are also valid in classical logic).

$$ \frac{A}{A \lor B} \;\; \text{is disjunction introduction} $$ $$ \frac{A \land B}{A} \;\; \text{and} \;\; \frac{A \land B}{B} \;\; \text{are conjunction elimination} $$ $$ \frac{A \; \text{and} \; B}{A \land B} \;\; \text{is conjunction introduction} $$ $$ \frac{A}{\lnot\lnot A} \;\; \text{is double negation introduction} $$ $$ \frac{\lnot\lnot A}{A} \;\; \text{is double negation elimination} $$

For my axioms, I will take the following axiom schema (which I nonstandardly call the axiom schema of tautological horn clauses).

$$ \lnot x_1 \lor \lnot x_2 \lor \cdots \lor \lnot x_n \lor A \;\; \text{where $A$ is in $\{x_1, x_2, \cdots, x_n \}$} $$

The classically-valid pair of rules $\frac{A \lor B \; \text{and}\; \lnot A}{B}$ and $\frac{A \lor B \;\text{and}\; \lnot B}{A}$ is not valid in LP.
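Ordering the truth values $F < U < T$ makes the table above read $\land = \min$ and $\lnot$ the swap of $T$ and $F$ (this encoding is my own choice); a small script then confirms the table and finds the counterexample to these rules:

```python
# truth values ordered F < U < T; designated values are U and T
F, U, T = 0, 1, 2
DESIGNATED = {U, T}

def lp_not(a):
    return {F: T, U: U, T: F}[a]  # swaps T and F, fixes U

def lp_and(a, b):
    return min(a, b)

def lp_or(a, b):
    return max(a, b)

# reproduce the table: e.g. U and T = U, F and U = F, not U = U
assert lp_and(U, T) == U and lp_and(F, U) == F and lp_not(U) == U

# disjunctive syllogism fails: A or B and not-A designated, B not designated
counterexamples = [(a, b) for a in (F, U, T) for b in (F, U, T)
                   if lp_or(a, b) in DESIGNATED
                   and lp_not(a) in DESIGNATED
                   and b not in DESIGNATED]
assert counterexamples == [(U, F)]
```

The only counterexample is $A = U$, $B = F$: both premises take the designated value $U$, while the conclusion is $F$.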

I'm pretty sure that the collection of rules above with the disjunction-elimination pair of rules forms a Hilbert calculus for classical logic. The rules are certainly sound with respect to classical logic. I'm pretty sure that these inference rules also guarantee that all the connectives are truth-functional, which gives us classical logic because our background logic is classical logic.

I'm less confident that the rules above without disjunction elimination are complete with respect to the intended semantics of LP.

One property that LP has is that all the connectives are $U$-preserving, so if both arguments are $U$, the result will be $U$ as well. Intuitively, this is the thing that knocks out disjunction elimination. Knowing that the negation of a proposition is true in LP is simply not as informative as it would be in classical logic.


Why can't this function be used in a Fourier expansion?

Posted: 28 Nov 2021 11:40 AM PST

$$y = \arccos(\sin(2x))$$

I can't see why it can't be used in a Fourier expansion series. It seems to me that it satisfies all the Dirichlet conditions:

Periodic? Yes, with period $\pi$.

Continuous? Yes.

Finite number of maxima/minima in a period? Yes, there is one maximum and zero minima.

Modulus of the integral converges? Yes, $\frac{\pi^2}{2}$.

So, what is the problem with the function? Why can't it be used for a Fourier expansion?

How do I compute this integral with the Gauss formula and in a direct way?

Posted: 28 Nov 2021 11:51 AM PST

I have the following problem:

Compute the integral $$\int_{\partial A} \langle (x,y^2,z^3),\nu\rangle dS$$ for $A=\{(x,y,z):-1\leq x,y,z\leq 1\}$ in two different ways.

I first wanted to compute it directly, before using the Gauss formula, and I did the following computations: [image: my computations]

But somehow I have a problem, since my result is wrong: I don't integrate over $z$ at all in the first one, but I don't see my mistake.

Could someone take a look?

Thank you a lot.
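As a reference point for debugging, both sides can be cross-checked numerically (my own sketch; here $\operatorname{div} F = 1 + 2y + 3z^2$, each cube face has area $4$, and $F\cdot\nu$ happens to be constant on every face):

```python
# vector field F = (x, y^2, z^3) on the cube A = [-1, 1]^3

# surface side: F·nu is constant on each face; every face has area 4
face_dots = {
    "x=+1": 1.0,   # F_x * nu_x = (+1)(+1)
    "x=-1": 1.0,   # (-1)(-1)
    "y=+1": 1.0,   # y^2 = 1, nu_y = +1
    "y=-1": -1.0,  # y^2 = 1, nu_y = -1
    "z=+1": 1.0,   # z^3 = +1, nu_z = +1
    "z=-1": 1.0,   # z^3 = -1, nu_z = -1
}
flux = 4.0 * sum(face_dots.values())

# volume side: div F = 1 + 2y + 3z^2, midpoint rule over the cube
n = 40
h = 2.0 / n
pts = [-1.0 + (i + 0.5) * h for i in range(n)]
vol = sum((1 + 2 * y + 3 * z * z) * h ** 3
          for x in pts for y in pts for z in pts)

assert abs(flux - 16.0) < 1e-9
assert abs(vol - flux) < 0.02
```

Both sides give $16$, so any direct computation that skips the $z$-faces will miss their contribution of $8$.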

Inequality involving log in the numerator and linear expressions in the denominator of fractions

Posted: 28 Nov 2021 11:52 AM PST

If $a$, $b$ and $c$ are real numbers in the interval $(0, 1)$, find the minimum value of $$\frac{\log_{a} b}{a-b+1}+\frac{\log_{b} c}{b-c+2}+\frac{\log_{c} a}{c-a+3}$$

I used AM-GM-HM inequality as follows:

$$\frac13 \left( \frac{\log_{a} b}{a-b+1}+\frac{\log_{b} c}{b-c+2}+\frac{\log_{c} a}{c-a+3} \right) \ge \sqrt[3]{\frac{1}{(a-b+1)(b-c+2)(c-a+3)}} \ge \frac{3}{a-b+1+b-c+2+c-a+3}=\frac12$$

Thus

$$\frac{\log_{a} b}{a-b+1}+\frac{\log_{b} c}{b-c+2}+\frac{\log_{c} a}{c-a+3} \ge \frac32$$

Is this even possible? Because I couldn't solve for the equality condition, which is

$$\frac{\log_{a} b}{a-b+1}=\frac{\log_{b} c}{b-c+2}=\frac{\log_{c} a}{c-a+3}$$

Is there another way to see that the given expression indeed can take $3/2$?
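A crude numeric probe (my own sketch; the sampling margins of $10^{-3}$ are arbitrary) is consistent with the derived bound: at $a=b=c$ every log term equals $1$ and the expression evaluates to $11/6$, and random sampling never goes below $3/2$:

```python
import math
import random

def expr(a, b, c):
    return (math.log(b, a) / (a - b + 1)
            + math.log(c, b) / (b - c + 2)
            + math.log(a, c) / (c - a + 3))

# at a = b = c every log term equals 1, giving 1 + 1/2 + 1/3 = 11/6
assert abs(expr(0.5, 0.5, 0.5) - 11 / 6) < 1e-12

# random search over (0, 1)^3 stays above the claimed bound 3/2
random.seed(0)
best = float("inf")
for _ in range(20000):
    a, b, c = (random.uniform(1e-3, 1 - 1e-3) for _ in range(3))
    best = min(best, expr(a, b, c))
assert best >= 1.5
```

Equality in the final step of the chain would need $a-b+1 = b-c+2 = c-a+3 = 2$, which forces $a - b = 1$, impossible for $a, b \in (0,1)$; that is consistent with the search never reaching $3/2$.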

Let $f(x)\in\mathbb{F}_q[x]$. Show that if $\alpha$ is a root of $f(x)$ then $\alpha^q$ is a root of $f(x)$.

Posted: 28 Nov 2021 11:42 AM PST

Question: Let $f(x)\in\mathbb{F}_q[x]$, where $f$ is a polynomial and $\mathbb{F}_q$ is a finite field of $q$ elements. Show that if $\alpha$ is a root of $f(x)$ then $\alpha^q$ is a root of $f(x)$.

Thoughts: I know that the characteristic of a finite field is prime. So, we may write $q=p^n$ for some prime $p$ and $n\in\mathbb{N}$. I would like to show that when we take $f(\alpha^q)$, the polynomial $x^q-x=0$ splits over $\mathbb{F}_q$ (I think this would show what we want...?) So, we may write $f(x)=a_0+\dots +a_nx^n$ for $a_i\in F$. Then, since $\alpha$ is a root of $f$, we have that $f(\alpha)=a_0+\dots +a_n\alpha^n=0$. So, $f(\alpha^q)=(a_0+\dots +a_n\alpha^n)^q=a_0^q+\dots +a_n^q\alpha^{nq}$. But, now I'm stuck. Could I just say that $a_i^q$ are still just elements of $\mathbb{F}_q$, so I can write this as $a_0+\dots +a_n\alpha^{nq}$... but I don't think this gets me anywhere. Any help would be greatly appreciated! Thank you.
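The two facts implicitly used above, $a^q = a$ for $a\in\mathbb{F}_q$ and the freshman's dream $(a+b)^q = a^q + b^q$, can be sanity-checked in the prime case $q = p$ (my own small check; the general case $q = p^n$ would need extension-field arithmetic):

```python
p = 7  # a sample prime, so F_p = Z/pZ

# Fermat: a^p = a for every a in F_p
for a in range(p):
    assert pow(a, p, p) == a

# freshman's dream: (a + b)^p = a^p + b^p in F_p
for a in range(p):
    for b in range(p):
        assert pow(a + b, p, p) == (pow(a, p, p) + pow(b, p, p)) % p
```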

Solving $|1 - \ln(1 - |2x| + x)| = |1 - |3x||$

Posted: 28 Nov 2021 11:45 AM PST

I am asking for your help (that is, just to verify my steps and perhaps correct my errors) with this exercise.

$$|1 - \ln(1 - |2x| + x)| = |1 - |3x||$$

Now here is what I have done. First of all I have to analyse the left-hand term, starting from the inner part:

$$1 - |2x| + x = \begin{cases} 1+3x\ \ \text{for}\ \ x < 0 \\\\ 1-x\ \ \text{for}\ \ x > 0 \end{cases}$$

This being said and it's now the existence for the logarithm's turn:

$$\begin{cases} 1+3x > 0 \ \ \text{for}\ \ x > -1/3 \\\\ 1-x >0 \ \ \text{for}\ \ x < 1\end{cases}$$

Whence eventually, intersecting the solutions:

$$\ln(1 - |2x| + x) = \begin{cases} \ln(1+3x)\ \ \text{for}\ \ -1/3 < x \leq 0 \\\\ \ln(1-x)\ \ \text{for}\ \ 0\leq x < 1 \end{cases}$$

Going on, it is time for the external absolute value. I therefore split into two:

$$|1 - \ln(1+3x)| = \begin{cases} 1 - \ln(1+3x) \ \ \text{for}\ \ x < \frac{e-1}{3} \\\\ -1 + \ln(1+3x)\ \ \text{for}\ \ x>\frac{e-1}{3}\end{cases}$$

Yet here the second possibility NEVER happens, for we had $-1/3 < x \leq 0$, whereas the first condition is less restrictive than that, so we keep the restriction we had.

For the second piece:

$$|1 - \ln(1-x)| = \begin{cases} 1 - \ln(1-x)\ \ \text{for}\ \ x > 1-e \\\\ -1 + \ln(1-x)\ \ \text{for}\ \ x < 1-e\end{cases}$$

Again the second one never happens, and the first one implies a new restriction. Eventually:

$$|1 - \ln(1 - |2x|+x)| = \begin{cases} 1 - \ln(1+3x)\ \ \text{for}\ \ -1/3 < x \leq 0 \\\\ 1 - \ln(1-x) \ \ \text{for}\ \ 1-e \leq x < 1\end{cases}$$

Is this correct so far?

Clearly the second part is easier (I hope), getting in the end

$$1 - |3x| = \begin{cases} 1+3x\ \ \text{for}\ \ x<0 \\\\ 1-3x \ \ \text{for}\ \ x> 0\end{cases}$$

Which splits the external absolute values into two pieces again:

$$|1 + 3x| = \begin{cases} 1+3x\ \ \text{for}\ \ x>-1/3 \\\\ -1+3x \ \ \text{for}\ \ x< -1/3\end{cases}$$

and

$$|1 - 3x| = \begin{cases} 1-3x\ \ \text{for}\ \ x<1/3 \\\\ -1+3x \ \ \text{for}\ \ x> 1/3\end{cases}$$

Then I guess it's all to unify, and what remains is:

$$|1 - |3x|| = \begin{cases} 1+3x\ \ \text{for}\ \ -1/3 <x<0 \\\\ 1-3x \ \ \text{for}\ \ 0<x< 1/3 \\\\ -1+3x\ \ \text{for}\ \ x>1/3 \ \text{ or }\ x<-1/3\end{cases}$$

To get the final equations, I split the interval of the logarithm as follows:

$$\{1-e<x<1\} = \{1-e<x<-1/3\} \cup \{-1/3<x<0\} \cup \{0<x<1/3\} \cup \{1/3<x<1\}$$

So the final equation reduces to the following equations

$$ \begin{cases} 1 - \ln(1+3x) = 1 + 3x\ \ \ \text{for}\ \ \ -1/3 < x < 0 \\\\ 1 - \ln(1-x) = 1 + 3x\ \ \ \text{for}\ \ \ -1/3 < x < 0 \\\\ 1 - \ln(1-x) = 1 - 3x\ \ \ \text{for}\ \ \ 0 < x < 1/3 \\\\ 1 - \ln(1-x) = -1 + 3x\ \ \ \text{for}\ \ \ 1-e < x < -1/3 \ \text{ or }\ 1/3 < x < 1 \\\\ \end{cases} $$

FINAL EDIT

P.S. I do know there is a unique solution, which is $x = 0$; it was rather intuitive. But I am asking whether, supposing we did not know anything about the solutions, the method I have used is right or not.
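Supposing we knew nothing about the solutions, a brute-force scan of $\big|\,\mathrm{LHS}(x) - \mathrm{RHS}(x)\,\big|$ over the domain $(-1/3, 1)$ (my own sketch; the grid size is arbitrary) does point at $x = 0$:

```python
import math

def lhs(x):
    return abs(1 - math.log(1 - abs(2 * x) + x))

def rhs(x):
    return abs(1 - abs(3 * x))

# scan the open domain (-1/3, 1) for the point where both sides are closest
best_x, best_gap = None, float("inf")
n = 100000
for i in range(1, n):
    x = -1 / 3 + (4 / 3) * i / n
    gap = abs(lhs(x) - rhs(x))
    if gap < best_gap:
        best_gap, best_x = gap, x

assert abs(best_x) < 1e-3  # the minimum gap sits at x ≈ 0
assert best_gap < 1e-2
```

Near $x = 0$ the difference behaves like $4x$ on the right and $-6x$ on the left, so the two sides touch at $0$ without crossing, which is why a sign-change search would miss it.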

Proof that Dirichlet series $\sum_{n=1}^{\infty}\frac{2^{\omega(n)}}{n^2}=\frac{5}{2}$

Posted: 28 Nov 2021 11:55 AM PST

So I want to prove the following:

$$\sum_{n=1}^{\infty}\frac{2^{\omega(n)}}{n^2}=\frac{5}{2},$$ where $\omega(n)$ is the number of distinct prime factors of $n.$

I computed it to $10^{10}$ and it does seem to be slowly approaching $\frac{5}{2}.$ Also, I am aware of the following result:

$$\sum_{n=1}^{\infty}\frac{\omega(n)}{n^2}=\zeta(2)P(2),$$ where $P(2)$ is the prime zeta function.

I am not quite sure how exactly to go about this, there doesn't seem to be a way to get from the known result to the one I'm trying to solve.
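A sieve-based partial sum (my own sketch; the cutoff $10^5$ is arbitrary) agrees with $5/2$ to roughly three decimal places:

```python
def partial_sum(N):
    # omega[n] = number of distinct prime factors of n, via a sieve
    omega = [0] * (N + 1)
    for p in range(2, N + 1):
        if omega[p] == 0:  # p is prime
            for multiple in range(p, N + 1, p):
                omega[multiple] += 1
    return sum(2.0 ** omega[n] / (n * n) for n in range(1, N + 1))

approx = partial_sum(10 ** 5)
```

Since $2^{\omega(n)} \leq d(n)$, the tail beyond $N$ is on the order of $(\log N)/N$, which matches the slow convergence observed.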

Prove that an equation with two integer variables has no solution

Posted: 28 Nov 2021 11:55 AM PST

$$\dfrac{(2n)!}{((6k)n!(2^n)+1)} = n^2$$

How will you prove that it has no solution if $n$ and $k$ are integers?

Model Theory Question for Predicate Logic in van Dalen's Logic and Structure

Posted: 28 Nov 2021 11:57 AM PST

Let $L$ be a language without identity and with at least one constant.

Let $\sigma = \exists x_1 \cdots \exists x_n\,\varphi(x_1,\dots,x_n)$, where $\sigma$ is a formula in predicate logic.

Let $\Sigma = \{\varphi(t_1,\dots,t_n) \mid t_i \text{ closed in } L\}$, where $\varphi$ is quantifier-free.

I want to show that:

(i) $\models \sigma$ $\iff$ each structure $A$ is a model of at least one sentence in $\Sigma$. (Hint: for each $A$, look at the substructure generated by $\emptyset$.)

(ii) Consider $\Sigma$ as a set of propositions. Show that for each valuation $v$ (in the sense of propositional logic) there is a model $A$ such that $v(\varphi(t_1,\dots,t_n))$ equals the interpretation of $\varphi(t_1,\dots,t_n)$ in $A$, for all $\varphi(t_1,\dots,t_n) \in \Sigma$.

This question is pg.134 #16 of Logic and Structure by Van Dalen.

My attempt to answer:

For (i), I know that $\models \sigma$ iff every structure $A$ is a model of $\varphi(t_1,\dots,t_n)$ for some $t_1,\dots,t_n$ in $A$'s domain. But how do I show that this means each structure $A$ is a model of at least one sentence in $\Sigma$? I followed the hint and constructed the substructure generated by $\emptyset$, which is just a structure whose domain consists of the constants of $A$ and the terms made from them (definition of substructure generated by a set in the linked image). I don't see how I can use this hint. I also know that $\varphi(t_1,\dots,t_n)$ is true independently of what $t_1,\dots,t_n$ are, but how does that relate to the language $L$?

For (ii), I'm thinking that $\varphi$ implies $\varphi$, so it is consistent and has a model, but I feel this is wrong and I'm not sure where to correctly begin.

Help is appreciated.

Triangle Center Midpoint Proof

Posted: 28 Nov 2021 11:45 AM PST

I am working through problem 45, Stewart's calculus 6e.

Use vectors to prove that the line joining the midpoints of two sides of a triangle is parallel to the third side and half its length.

Ultimately, I am trying to show that the midpoint vector (the line between the triangle's two midpoints; [image: labeled triangle]) is parallel and half the size. I have included a picture labeling everything.

The three vertices of an arbitrary triangle as vectors $\vec{A} = < a_1,a_2>$, $\vec{B} = < b_1,b_2>$, $\vec{C} = < c_1,c_2>$.

Then connect the vertices to make the sides of the triangle: $\vec{AB} = < a_1 + b_1,a_2 + b_2>$, $\vec{BC} = < b_1 + c_1 ,b_2 + c_2>$, $\vec{CA} = < c_1 + a_1 ,c_2 + a_2>$.

Next we want to define the midpoints of $AB_m = < \dfrac{a_1 + b_1}{2} , \dfrac{a_2 + b_2}{2}>$ and $BC_m = < \dfrac{b_1 + c_1}{2} , \dfrac{b_2 + c_2}{2}>$

We want to show that $\vec{AB_mBC_m}$ is half the size of $\vec{AC}$.

$\vec{AB_mBC_m} = < \dfrac{a_1 + b_1}{2} + \dfrac{b_1 + c_1}{2}, \dfrac{a_2 + b_2}{2} + \dfrac{b_2 + c_2}{2} >$

We factor the vector equation by $\dfrac{1}{2}$ :

$\vec{AB_mBC_m} = \dfrac{1}{2} < a_1 + b_1 + b_1 + c_1, a_2 + b_2 + b_2 + c_2 >$

$\vec{AB_mBC_m} = \dfrac{1}{2} < a_1 + 2b_1 + c_1, a_2 + 2b_2 + c_2 >$

But this is not equal to $\dfrac{1}{2}$ of $\vec{CA}$. Did I write something incorrectly or am I missing something?
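For comparison, the theorem itself checks out numerically when side vectors are taken as differences of vertex vectors rather than sums (a sketch with sample vertices of my own choosing):

```python
# concrete vertices (arbitrary choices), with sides taken as differences
# of vertices and midpoints as averages
A = (1.0, 2.0)
B = (4.0, 6.0)
C = (-2.0, 5.0)

M_ab = ((A[0] + B[0]) / 2, (A[1] + B[1]) / 2)  # midpoint of side AB
M_bc = ((B[0] + C[0]) / 2, (B[1] + C[1]) / 2)  # midpoint of side BC

seg = (M_bc[0] - M_ab[0], M_bc[1] - M_ab[1])   # vector between the midpoints
ac = (C[0] - A[0], C[1] - A[1])                # vector from A to C

assert seg == (ac[0] / 2, ac[1] / 2)  # parallel and half the length
```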

Showing that the flow value is well defined.

Posted: 28 Nov 2021 11:41 AM PST

Let $G$ be a directed graph with edge set $E$ and $u:E \to \mathbb{R}_{\geq 0}$ a capacity function, as well as $x: E \to \mathbb{R}_{\geq 0}$ be a function. Then $x$ is called an $s-t$ flow if the following properties are satisfied:

$(a)$ $x(e) \leq u(e)$ for all $e \in E$,

$(b)$ for all $v \in V - \{s,t\}$ it holds that: $\sum_{e \in \delta^{+}(v)} x(e)-\sum_{e \in \delta^{-}(v)} x(e)=0,$ where $\delta^+(v)$ denotes the edges that are starting in $v$ and $\delta^{-}(v)$ are the edges that are ending in $v$.

The flow value $v(x)$ is now defined as $v(x):=\sum_{e \in \delta^{+}(s)} x(e)-\sum_{e \in \delta^{-}(s)} x(e)$, or equivalently $v(x):=\sum_{e \in \delta^{+}(t)} x(e)-\sum_{e \in \delta^{-}(t)} x(e)$. My question is, why are those equivalent? My idea was to sum the equation in $(b)$ over all $v \in V - \{s,t\}$, giving $$0= \sum_{v \in V -\{s,t\}}(\sum_{e \in \delta^{+}(v)} x(e)-\sum_{e \in \delta^{-}(v)} x(e)) \iff v(x)=\sum_{v \in V -\{t\}}(\sum_{e \in \delta^{+}(v)} x(e)-\sum_{e \in \delta^{-}(v)} x(e)),$$ where I am using the first definition for $v(x)$. However, I don't know how to proceed, is this last expression equal to the other definition of $v(x)$?
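The summing idea can be checked on a tiny network (my own example), and it also suggests why the two definitions are related: every edge contributes $+x(e)$ at its tail and $-x(e)$ at its head, so the excesses over all vertices sum to zero, and conservation kills every term except those at $s$ and $t$:

```python
# a tiny network: s = 0, t = 3, internal vertices 1 and 2
edges = {
    (0, 1): 2.0,
    (0, 2): 1.0,
    (1, 3): 2.0,
    (2, 3): 1.0,
}

def excess(v):
    # outflow minus inflow at vertex v
    out_flow = sum(f for (a, b), f in edges.items() if a == v)
    in_flow = sum(f for (a, b), f in edges.items() if b == v)
    return out_flow - in_flow

# conservation at the internal vertices
assert excess(1) == 0.0 and excess(2) == 0.0

# each edge contributes +x(e) at its tail and -x(e) at its head, so the
# excesses sum to zero over all vertices; hence excess(s) = -excess(t)
assert sum(excess(v) for v in range(4)) == 0.0
assert excess(0) == -excess(3) == 3.0
```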

Is cross multiplication the right term?

Posted: 28 Nov 2021 11:47 AM PST

I always hear the term cross multiplication when it comes to equations with fractions such as $$\frac{x}{y} = \frac{a}{b}$$ where cross multiplication is the act of multiplying $b$ and $x$, and $a$ and $y$, to get $$bx = ay.$$ What I don't get is why the term exists at all. Isn't this just multiplication? The term might even get confused with the cross product.

Why does the term cross multiplication exist even though it is just multiplication?

Factorization of $L$-functions for CM Elliptic Curves

Posted: 28 Nov 2021 11:38 AM PST

I saw recently that the $L$-functions of elliptic curves with CM can be factored as a product of simpler $L$-functions. In this question, I'd like to ask why that factorization is significant and what it tells us about the original elliptic curve. Here is the theorem in more detail:

Let $K$ be an imaginary quadratic field, and let $E$ be an elliptic curve with CM by $\mathcal{O}_K$. Suppose that $E$ is defined over a field $L$ containing $K$. Then the $L$-function of $E$ can be factored as follows: $$L(E/L, s) = L(s, \chi_{E/L}) \, L(s, \overline{\chi_{E/L}}) $$ where $\chi_{E/L}$ is the Grossencharacter of $E$ over $L$.

My question is: what is the importance of this factorization? Why is it profound / important? What "underlying truth" about elliptic curves with CM does this theorem reveal?

As an example of what I mean by the last question, here is an example where factorization of $L$-functions has a very nice intuitive meaning: given an abelian number field $K$, you can factor the Dedekind zeta function of $K$ as a product of Dirichlet $L$-functions: $$\zeta_K(s) = \prod_{\chi} L(s, \chi) $$ where $\chi$ runs over all irreducible characters of $\text{Gal }K/\mathbf{Q}$. The "underlying truth" of this theorem is that since characters are periodic functions on the integers, this theorem says that the splitting of primes in abelian extensions is governed by congruence conditions. This equality of $L$-functions is a concise way to encapsulate that fact.

Is there a similar interpretation in the case of CM elliptic curves? What concrete information about the arithmetic of $E$ is captured by the fact that you can split up the $L$-function of $E$ into the pieces $L(s, \chi_{E/L})$ and $L(s, \overline{\chi_{E/L}})$?

Counting the number of dominating rook placements in a chessboard

Posted: 28 Nov 2021 11:42 AM PST

Given a square $n \times n$ chessboard and $m$ rooks (with $m \geq \lceil{n/2}\rceil$ and $m \leq n^2$) I would like to count how many of the total $\tbinom{n^2}{m}$ possible combinations cover each "index" (more below) on the board.

This problem differs from the standard class of chess domination problems by the way covering is meant. Chessboards usually have distinct rows and columns identified by letters and numbers. In the standard chess domination problem each row and each column must be covered separately.

While in this problem the chessboard looks exactly like the normal one, the covering works differently. Here the $i^{th}$-row and the $i^{th}$-column represent the same "index". For an "index" $i$ to be covered, it is enough that one rook is placed somewhere in the $i^{th}$-row or $i^{th}$-column. While each square on the diagonal covers just one index, the other squares cover two.

Whereas in the traditional domination problem at least $n$ rooks are needed to dominate the chessboard, here $\lceil{n/2}\rceil$ are enough.

Consider for example the following instances with $n = 4$ and $m = 2$:

Valid case                   Invalid case
|---|---|---|---|---|        |---|---|---|---|---|
|   | 1 | 2 | 3 | 4 |        |   | 1 | 2 | 3 | 4 |
|---|---|---|---|---|        |---|---|---|---|---|
| 1 |   | A |   |   |        | 1 |   | A |   |   |
|---|---|---|---|---|        |---|---|---|---|---|
| 2 |   |   |   |   |        | 2 |   |   | B |   |
|---|---|---|---|---|        |---|---|---|---|---|
| 3 |   |   |   | B |        | 3 |   |   |   |   |
|---|---|---|---|---|        |---|---|---|---|---|
| 4 |   |   |   |   |        | 4 |   |   |   |   |
|---|---|---|---|---|        |---|---|---|---|---|

The placement on the left is valid since $A$ covers indexes $1$ and $2$ and rook $B$ covers indexes $3$ and $4$. All the $4$ indexes are covered by $A$ or $B$.

The placement on the right is invalid since index $4$ is not covered by any rook.

When $m$ grows in size, indexes can be covered by more than one rook.

Given the explanation above, I would like to know whether there exists a polynomial algorithm/formula that, given $n$ and $m$, computes how many of the $\tbinom{n^2}{m}$ possible rook placements are valid.

The exponential algorithm is trivial. Just iterate through all the possible $\tbinom{n^2}{m}$ placements and compute for each of them whether the placement is valid or not.
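That exponential baseline can be written down directly (my own sketch; rows, columns, and indexes are 0-based here):

```python
from itertools import combinations

def count_valid(n, m):
    # brute force: a placement is valid when every index 0..n-1 is
    # covered by some rook sitting in row i or column i
    squares = [(r, c) for r in range(n) for c in range(n)]
    valid = 0
    for placement in combinations(squares, m):
        covered = set()
        for r, c in placement:
            covered.add(r)
            covered.add(c)
        if len(covered) == n:
            valid += 1
    return valid
```

Each rook covers the index set $\{r, c\}$, so a diagonal rook contributes one index and an off-diagonal rook two, exactly as described above.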

Defining the Derivative using Internal Set Theory (Non Standard Analysis)

Posted: 28 Nov 2021 11:47 AM PST

As an engineer, I have found NSA (Non Standard Analysis) to be much closer to our intuition than traditional calculus. Since there are basically two approaches to NSA, one using Hyperreals and another using Internal Set Theory (IST) I decided to take a look at both approaches. With Hyperreals, we can say that a function $f: \mathbb{R} \rightarrow \mathbb{R}$ has in a real point $a$ the derivative $m$ if for all hyperreal infinitesimal $\Delta x$

$ st(\frac{f(a + \Delta x) - f(a)}{\Delta x}) = m $.

I'm actually reading Alain Robert's book, which employs the IST approach, and it doesn't actually define the derivative of $f$ by means of NSA alone. What it does is keep the classical definition of differentiability (i.e., using limits) and define the new concept of S-differentiability by saying that $f$ is S-differentiable at a point $a$ if there is a standard $m$ with

$\frac{f(x) - f(a)}{x - a} \approx m$ for all $x \approx a$.

The problem with this definition is that it's only equivalent to differentiability when $f$ and $a$ are both standard. Although differentiability could be implicitly characterized via S-differentiability by defining that $f$ is differentiable at $a$ if $(f,a)$ belongs to $\{(f,a): f\text{ S-differentiable at $a$}\}^{S}$, as the book suggests, this is much more artificial and farther from our intuition than the definition using hyperreals or even the classical definition itself. Furthermore, it only defines the relation of differentiability, not the operation of taking a derivative. My question is then the following: how can we define the derivative (or differentiability) in IST in a way that preserves intuition and doesn't fall back on classical definitions?

Cocountable Topology is not Hausdorff.

Posted: 28 Nov 2021 11:46 AM PST

This question has been answered already, this is an attempt to rephrase it.

Let $(X,\tau)$ be a cocountable topology with $X$ uncountable.

Show that $(X, \tau)$ is not Hausdorff.

Let $r\not=s$, and $r\in U$ and $s \in V$, where $U, V$ are open and not empty.

Assume $U \cap V = \emptyset$.

$X\setminus (U \cap V) = (X\setminus U)\cup (X\setminus V)$.

The RHS is a union of countable sets, hence countable.

The LHS is $X\setminus\emptyset=X$, which is uncountable, a contradiction.

Perhaps nitpicking: $\emptyset$ is open and an element of the topology, but $X\setminus\emptyset$ is not countable.

How do you go about this?

Given the function $g(X) = \operatorname{tr}(AXB)$, compute the derivative with respect to $X$, where $A,X,B$ are matrices

Posted: 28 Nov 2021 11:51 AM PST

I do not want a solution; I just have a question about an equation for the derivative of a trace. I read in a book that I can apply this rule: [image: rule from the book]

My question is whether I can write $\frac{ \partial} {\partial X} \operatorname{tr}(AXB) = \operatorname{tr}(\frac{ \partial AXB} {\partial X})$.

If I now compute the derivative of it, do I get $\operatorname{tr}(B^T⊗A)$?
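As a sanity check (my own sketch, in plain Python), one can compare a finite-difference gradient against the standard identity $\partial \operatorname{tr}(AXB)/\partial X = (BA)^T = A^T B^T$, whose result is a matrix rather than a trace:

```python
def matmul(P, Q):
    rows, inner, cols = len(P), len(Q), len(Q[0])
    return [[sum(P[i][t] * Q[t][j] for t in range(inner)) for j in range(cols)]
            for i in range(rows)]

def trace(M):
    return sum(M[i][i] for i in range(len(M)))

A = [[1.0, 2.0], [3.0, 4.0]]
B = [[0.5, -1.0], [2.0, 0.0]]

def g(X):
    return trace(matmul(matmul(A, X), B))

# finite-difference gradient of g at an arbitrary point X0
X0 = [[0.2, 0.7], [-0.3, 1.1]]
h = 1e-6
grad = [[0.0, 0.0], [0.0, 0.0]]
for i in range(2):
    for j in range(2):
        Xp = [row[:] for row in X0]
        Xm = [row[:] for row in X0]
        Xp[i][j] += h
        Xm[i][j] -= h
        grad[i][j] = (g(Xp) - g(Xm)) / (2 * h)

# standard identity: d tr(AXB) / dX = (BA)^T
BA = matmul(B, A)
for i in range(2):
    for j in range(2):
        assert abs(grad[i][j] - BA[j][i]) < 1e-6
```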

Showing that $\displaystyle\limsup_{n\to\infty}x_n=\sup\{\text{cluster points of $\{x_n\}_{n=1}^\infty$}\}$

Posted: 28 Nov 2021 11:37 AM PST

Let $\{x_n\}_{n=1}^\infty$ be a sequence in $\mathbb R$. I used to follow the definition that $$\limsup_{n\to\infty}x_n=\lim_{n\to\infty}\sup_{k\geq n}x_k$$ with limits understood in the sense of the extended real number system, but recently another definition has come into my sight:

If the sequence is not bounded above, we define $\limsup_{n\to\infty}x_n=\infty$. Otherwise, we define $$\limsup_{n\to\infty}x_n=\begin{cases} \sup(c\ell)&,c\ell\neq\emptyset\\ -\infty&,c\ell=\emptyset, \end{cases}$$ where $c\ell$ is the set of all cluster points of $\{x_n\}_{n=1}^\infty$. A cluster point of $\{x_n\}_{n=1}^\infty$ is a point in $\mathbb R$ whose open balls all contain infinitely many terms (some of them may repeat) of $\{x_n\}_{n=1}^\infty$.

I would like to show that this definition implies the old one:

If the sequence is not bounded above, then $\forall M\in\mathbb R$, $\exists N\in\mathbb N$ s.t. $x_N>M$. In this case, $\sup_{k\geq n}x_k=\infty$ for each $n\in\mathbb N$, implying $\{\sup_{k\geq n}x_k\}_{n=1}^\infty\searrow\infty$. Suppose instead that the sequence is bounded above. When the sequence has no cluster points, one can prove that $\lim_{n\to\infty}x_n=-\infty$, which in turn gives us $$\lim_{n\to\infty}\sup_{k\geq n}x_k=-\infty.$$ The proof has been going well so far, but I have no idea how to show that if $c\ell\neq\emptyset$, then $$\lim_{n\to\infty}\sup_{k\geq n}x_k=\sup(c\ell).$$ The supremum of the cluster points? What is it really? Thank you.
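A concrete example (my own choice) may help make $\sup(c\ell)$ tangible: for $x_n = (-1)^n(1+1/n)$ the cluster points are $-1$ and $1$, and the tail suprema decrease toward $\sup(c\ell) = 1$:

```python
# example: x_n = (-1)^n * (1 + 1/n); its cluster points are -1 and 1,
# so sup(cl) = 1
def x(n):
    return (-1) ** n * (1 + 1 / n)

# tail suprema sup_{k >= n} x_k, approximated over long finite windows
tail_sups = [max(x(k) for k in range(n, n + 10000))
             for n in (1, 10, 100, 1000)]

assert tail_sups == sorted(tail_sups, reverse=True)  # nonincreasing in n
assert abs(tail_sups[-1] - 1.0) < 0.01               # heading to sup(cl) = 1
```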

Numerical Integral of Complete Elliptic Integral

Posted: 28 Nov 2021 11:45 AM PST

I'm looking to numerically evaluate an integral of the form $$I=\int_0^1rK(r)f(r)\,dr$$ where $f(r)$ is a smooth function known at a set of grid points $\{r_i\}$ with error $O(\Delta r^2)$ ($\Delta r\sim10^{-3}$) and $K$ is the complete elliptic integral of the first kind. A first attempt would be to simply use the trapezoidal rule, but $K(r)$ is singular at $r=1$ so this is poorly conditioned as the grid becomes very fine. One approach I have seen is to use an asymptotic expansion of $K(r)\approx-\log\sqrt{1-r^2}$ for $r\approx 1$ and then taking $f$ to be constant in some small interval $(1-\epsilon,1)$ (chosen to fall on a grid point) so that $$\int_{1-\epsilon}^1rK(r)f(r)\,dr\approx f(1)\int_{1-\epsilon}^1-\frac{r}{2}\log(1-r^2)\,dr=\frac{f(1)}{4}\left[1+\epsilon(2-\epsilon)\log(1-(1-\epsilon)^2)+(1-\epsilon)^2\right]$$ (if I worked out the integral correctly). This approach seems very sensitive to the value of $\epsilon$ chosen, and it further assumes $f$ is constant on this interval which does not preserve the second-order accuracy. Is there a way to approach this integral that does not require an arbitrary choice of $\epsilon$, and further can compute $I$ with $O(\Delta r^2)$ error?
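One way to organize the tail (my own sketch, not a resolution of the $\epsilon$ question): evaluate $K$ by the arithmetic-geometric mean, use the sharper asymptotic $K(r)\approx\log(4/\sqrt{1-r^2})$ near $r=1$, and integrate it in closed form over $(1-\epsilon, 1)$ with $f$ frozen at $f(1)$. For $f\equiv 1$ the exact value is $\int_0^1 rK(r)\,dr = 1$, which the sketch reproduces:

```python
import math

def ellipk(k):
    # complete elliptic integral of the first kind via the
    # arithmetic-geometric mean: K(k) = pi / (2 * AGM(1, sqrt(1 - k^2)))
    a, b = 1.0, math.sqrt(1.0 - k * k)
    while abs(a - b) > 1e-15:
        a, b = (a + b) / 2.0, math.sqrt(a * b)
    return math.pi / (2.0 * a)

def integral(f, n=4000, eps=1e-3):
    # trapezoid on [0, 1 - eps], then the asymptotic tail with f frozen at f(1)
    h = (1.0 - eps) / n
    s = 0.0
    for i in range(n + 1):
        r = i * h
        w = 0.5 if i in (0, n) else 1.0
        s += w * r * ellipk(r) * f(r)
    s *= h
    # closed form of int_{1-eps}^1 r*log(4/sqrt(1-r^2)) dr via u = 1 - r^2:
    # (1/2) int_0^{u0} (log 4 - (1/2) log u) du, with u0 = 1 - (1-eps)^2
    u0 = 1.0 - (1.0 - eps) ** 2
    tail = 0.5 * (math.log(4.0) * u0 - 0.5 * (u0 * math.log(u0) - u0))
    return s + f(1.0) * tail
```

This still involves a choice of $\epsilon$, but keeping the tail in closed form makes the result far less sensitive to it than freezing $f$ under the raw trapezoid rule.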

How to determine the bounding curve of a moving circular sector?

Posted: 28 Nov 2021 11:42 AM PST

Preamble: Consider the curve $\mathcal{C}$ formed by the segments $CD$, $DE$, $EB$ and the arc $BC$ presented below: [image: the curve $\mathcal{C}$]

Next, let $F$ be a "perspective" point and $l$ a line segment whose other end point is $A$. Suppose that we draw similar curves for each point $T$ on the line segment $l$, such that the bottom segment $DE$ in the picture is orthogonal to the ray originating from $F$ and passing through $T$. Below is my best attempt to graph what this looks like at the end points of a segment from $(0, 0)$ to $(2, 0)$. (N.B., the distances are not exact in the picture, as I don't know how to use GeoGebra well.) [image: my attempt]

Question: I would like to determine the bounding curve for the shape which is obtained by the moving/painting area. That is, what is the minimal curve $\mathcal{M}$ which contains all of the "moving" curves $\mathcal{C}$?

It is quite clear that portions of the bounding curve are given by the segments $CD$, $DA$ of the leftmost area, and by the segments $AE$, $EB$ of the rightmost area. What I'm struggling with is how to determine the bounding shape of the moving arc, given that the line $l$ (the line segment which curve $\mathcal{C}$ moves along) and the perspective point $F$ can be arbitrary.

Finding $\int \frac{d x}{x+\sqrt{1-x^{2}}}$.

Posted: 28 Nov 2021 11:47 AM PST

I have to calculate the following integral: $$ \int \frac{d x}{x+\sqrt{1-x^{2}}} $$

An attempt:$$ \begin{aligned} \int \frac{d x}{x+\sqrt{1-x^{2}}} & \stackrel{x=\sin t}{=} \int \frac{\cos t}{\sin t+\cos t} d t \\ &=\int \frac{\cos t(\cos t-\sin t)}{\cos 2 t} d t \end{aligned} $$ I find the solution is $$\frac{\ln{\left(x + \sqrt{1 - x^{2}} \right)}}{2} + \frac{\sin^{-1}{\left(x \right)}}{2}+C$$ How can I get this without trigonometric substitution?
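Whatever substitution is used, the stated result can be checked by differentiating it numerically (my own sketch):

```python
import math

def F(x):
    # the claimed antiderivative
    return 0.5 * math.log(x + math.sqrt(1 - x * x)) + 0.5 * math.asin(x)

def integrand(x):
    return 1.0 / (x + math.sqrt(1 - x * x))

# central finite differences of F should match the integrand
h = 1e-6
for x in (-0.5, 0.0, 0.3, 0.6):
    deriv = (F(x + h) - F(x - h)) / (2 * h)
    assert abs(deriv - integrand(x)) < 1e-6
```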
