Sunday, May 9, 2021

Recent Questions - Mathematics Stack Exchange



Convergence of $ I=\int_{0}^{+\infty} \frac{1}{\sqrt{t}} \cdot \sin \left(t+\frac{1}{t}\right) dt $

Posted: 09 May 2021 08:02 PM PDT

What I have done: $I$ is improper at $0$ and at $+\infty$. The integrand is continuous on $(0,+\infty)$ and therefore integrable on any closed interval contained in $(0,+\infty)$. $$ I=\int_{0}^{1} \frac{1}{\sqrt{t}} \sin \left(t+\frac{1}{t}\right) dt+\int_{1}^{+\infty} \frac{1}{\sqrt{t}} \sin \left(t+\frac{1}{t}\right) dt $$ Since $\left|\frac{\sin \left(t+\frac{1}{t}\right)}{\sqrt{t}}\right| \leqslant \frac{1}{\sqrt{t}}$ and $\int_{0}^{1} \frac{dt}{\sqrt{t}}$ converges, the comparison test shows that $\int_{0}^{1} \frac{\sin \left(t+\frac{1}{t}\right)}{\sqrt{t}}\, dt$ is absolutely convergent. Thus $I_{1}=\int_{0}^{1} \frac{\sin \left(t+\frac{1}{t}\right)}{\sqrt{t}}\, dt$ converges. For the second piece, $$ I_{2}=\int_{1}^{+\infty} \frac{1}{\sqrt{t}} \sin \left(t+\frac{1}{t}\right) dt=\int_{1}^{+\infty}\left(\frac{\sin (t) \cos \left(\frac{1}{t}\right)}{\sqrt{t}}+\frac{\cos (t) \sin \left(\frac{1}{t}\right)}{\sqrt{t}}\right) dt $$ The first-order Taylor expansions of $\cos \left(\frac{1}{t}\right)$ and $\sin \left(\frac{1}{t}\right)$ about $+\infty$ give $$ \cos \left(\frac{1}{t}\right) \approx 1-\frac{1}{2 t^{2}}, \qquad \sin \left(\frac{1}{t}\right) \approx \frac{1}{t} $$ Thus $$ I_{2}=\int_{1}^{+\infty}\left(\frac{\sin (t)}{\sqrt{t}}-\frac{\sin (t)}{2 t^{5 / 2}}+\frac{\cos (t)}{t^{3 / 2}}\right) dt $$ The last two terms are absolutely convergent by comparison (as in the study of $I_{1}$), and $\int_{1}^{+\infty} \frac{\sin t}{\sqrt{t}}\, dt$ converges by the Dirichlet test, so $I_{2}$ converges as a sum of convergent integrals. My questions: Is my method correct? Is there another nice way to study the convergence of $I$?
I stopped the Taylor expansions of $\cos(1/t)$ and $\sin(1/t)$ at first order; will stopping at first order always work? Can I have a counterexample? Thanks in advance!
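A quick numerical sanity check of the behaviour on $[1,+\infty)$ (a sketch, not a proof; the step size and cutoffs are my own arbitrary choices): the partial integrals $\int_1^T \frac{\sin(t+1/t)}{\sqrt{t}}\,dt$ should stabilize as $T$ grows, with successive differences bounded by roughly $2/\sqrt{T}$, which is exactly the Dirichlet-test mechanism.

```python
import math

def partial_integral(T, steps_per_unit=200):
    """Composite-Simpson approximation of the integral of sin(t+1/t)/sqrt(t) on [1, T]."""
    f = lambda t: math.sin(t + 1.0 / t) / math.sqrt(t)
    n = int((T - 1) * steps_per_unit)
    if n % 2:          # Simpson's rule needs an even number of subintervals
        n += 1
    h = (T - 1) / n
    s = f(1) + f(T)
    for i in range(1, n):
        s += (4 if i % 2 else 2) * f(1 + i * h)
    return s * h / 3

vals = [partial_integral(T) for T in (50, 100, 200, 400)]
# Successive partial integrals differ by at most about 2/sqrt(T):
gaps = [abs(b - a) for a, b in zip(vals, vals[1:])]
```

The shrinking gaps are consistent with convergence, though of course they cannot replace the comparison/Dirichlet argument.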

Proving the "extended version" of Markov's inequality

Posted: 09 May 2021 08:07 PM PDT

I am trying to prove the "extended version" of Markov's inequality:

Suppose $(X, \mathcal S, \mu)$ is a measure space and $h: X \to \mathbb R$ is an $\mathcal S$-measurable function. Prove that \begin{align} \mu\left(\left\{ x\in X: \left|h(x) - \int h\; d\mu\right|\ge c \right\}\right) \le \frac{1}{c^p} \left(\int |h|^p \; d\mu \right) \end{align} for any positive numbers $c, p$.

I have the proof for the $p=1$ case. I want to use induction to prove this for any positive number $p$. Of course, the first step in the induction process would be assuming that

\begin{align} \mu\left(\left\{ x\in X: \left|h(x) - \int h\; d\mu\right|\ge c \right\}\right) \le \frac{1}{c^p} \left(\int |h|^p \; d\mu \right) \end{align} is true. Then, we would need to show that \begin{align} \mu\left(\left\{ x\in X: \left|h(x) - \int h\; d\mu\right|\ge c \right\}\right) \le \frac{1}{c^{p+1}} \left(\int |h|^{p+1} \; d\mu \right) \end{align}

How can the last inequality be proven given the induction hypothesis?
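A concrete sanity check of the classical (uncentered) Markov bound $\mu(\{|h|\ge c\})\le c^{-p}\int|h|^p\,d\mu$, in closed form for $h(x)=x$ on $[0,1]$ with Lebesgue measure (my own illustrative choice, not from the exercise). One thing it highlights for the induction idea: the bound holds for all real $p>0$ at once, including non-integer $p$, so there is no integer parameter to induct on.

```python
# For h(x) = x on [0, 1] with Lebesgue measure:
#   mu({x : x >= c}) = 1 - c          (for 0 < c < 1)
#   integral of x^p dx over [0, 1] = 1 / (p + 1)
def lhs(c):
    return 1.0 - c                      # measure of the superlevel set

def rhs(c, p):
    return (1.0 / c ** p) * (1.0 / (p + 1.0))

# Check the bound on a grid of c values and (non-integer!) exponents p.
cases = [(c / 10.0, p) for c in range(1, 10) for p in (0.5, 1.0, 2.0, 3.0)]
```

The usual proof integrates $|h|^p \ge c^p \mathbf{1}_{\{|h|\ge c\}}$ directly, which works uniformly in $p$ and avoids induction altogether.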

Weaken Dickson's conjecture

Posted: 09 May 2021 07:55 PM PDT

I'm interested in Dickson's conjecture, but it is hard to consider, so I want a `weakened' Dickson's conjecture. It is the following statement.

Let $a$ and $b$ be fixed integers, and let $S$ be an infinite set of primes. Then the following statements are equivalent.

  1. There are infinitely many integers $n$ such that $n+a$ and $n+b$ have only prime factors in $S$.
  2. There is an integer $n$ such that $n+a$ and $n+b$ have only prime factors in $S$.

Is there a result for this weaker statement?
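Statement 2 can at least be probed computationally. A small search, with illustrative choices $S=\{2,3,5\}$, $a=0$, $b=1$ (my own, not from the question), for integers $n$ such that $n+a$ and $n+b$ have only prime factors in $S$:

```python
def is_smooth(m, primes):
    """True if every prime factor of m lies in `primes`."""
    for p in primes:
        while m % p == 0:
            m //= p
    return m == 1

S = (2, 3, 5)
a, b = 0, 1
hits = [n for n in range(1, 10 ** 4)
        if is_smooth(n + a, S) and is_smooth(n + b, S)]
```

Note the question takes $S$ infinite, whereas this toy search uses a finite $S$, for which Størmer's theorem makes the list of such $n$ finite; the equivalence above is only interesting for infinite $S$.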

What is the most efficient way to step on every blade of grass in a circular field?

Posted: 09 May 2021 07:54 PM PDT

You have a circular field of radius 1 meter and you really hate grass. So you make it your life's goal to step upon every single blade of grass you come across. What is the shortest path you can take within and on the circle to ensure that every single blade of grass is stepped upon?

Why $\frac{1}{-j(\omega-\omega_0)}e^{-j(\omega-\omega_0)t}\bigg |_{-\infty}^{\infty}=0$ when $\omega \neq \omega_0$?

Posted: 09 May 2021 08:09 PM PDT

In the class, my professor said the following

$$\int_{-\infty}^{\infty} e^{-j(\omega-\omega_0)t} dt=\frac{1}{-j(\omega-\omega_0)}e^{-j(\omega-\omega_0)t}\bigg |_{-\infty}^{\infty}=0,$$ by Euler's formula when $\omega \neq \omega_0$.

I am a bit confused about this! How does Euler's formula come in when dealing with $\infty$?
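One way to see the issue concretely: by Euler's formula the antiderivative evaluated at $\pm T$ gives $\int_{-T}^{T} e^{-j(\omega-\omega_0)t}\,dt = \frac{2\sin((\omega-\omega_0)T)}{\omega-\omega_0}$, which oscillates and never settles to $0$ as $T\to\infty$ in the ordinary sense (the professor's step is really a distributional statement). A quick check, with the illustrative value $\Delta = \omega-\omega_0 = 1$:

```python
import cmath
import math

delta = 1.0  # illustrative value of omega - omega_0

def truncated_integral(T, n=20000):
    """Midpoint-rule approximation of the integral of exp(-j*delta*t) over [-T, T]."""
    h = 2 * T / n
    return sum(cmath.exp(-1j * delta * (-T + (i + 0.5) * h)) for i in range(n)) * h

def closed_form(T):
    # Euler's formula: (exp(-j d T) - exp(j d T)) / (-j d) = 2 sin(d T) / d
    return 2 * math.sin(delta * T) / delta

samples = [1.0, 5.0, 25.5, 100.0]
```

The truncated integral matches the closed form $2\sin(\Delta T)/\Delta$, and that expression keeps swinging between $\pm 2/\Delta$ no matter how large $T$ gets.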

Verifying Proof: If $L$ is a poset with a bottom element, and $\exists \sup(S)$ for every subset $S \subset L$, then $L$ is a complete lattice

Posted: 09 May 2021 08:05 PM PDT

I am currently working through Kaplansky for self-study and was hoping to get some feedback on this proof. I would appreciate comments on clarity and legibility as well.

Claim: If $L$ is a poset with a bottom element, and $\exists \sup(S)$ for every subset $S \subset L$, then $L$ is a complete lattice.

Proof: Let $0 \in L$ be the bottom element and consider some subset $S \subset L$. We construct the set of lower bounds: $$ S' = \{s \in S \mid s \leq \sup(S) \; \text{and} \; s \neq \sup(S)\}. $$

Clearly $0 \in S'$, since $\forall l \in L$ we have $0 \leq l$; hence $\inf(S') = 0$. Additionally, notice that $\sup(S')$ exists by virtue of the fact that every subset of $L$ has a supremum. Because $S'$ consists of all lower bounds of $S$, it must be the case that $\sup(S') = \inf(S)$: any lower bound of $S$ must be in $S'$, and $s' \leq \sup(S')$ for all $s' \in S'$. Thus, $S$ has an infimum.

Since every subset of $L$ has an $\inf$ and $\sup$, it must be the case that $\forall a,b \in L$, $\exists \sup(\{a,b\})$ and $\exists \inf(\{a,b\})$. Thus, $L$ is a lattice.

$\therefore L$ is a complete lattice.

$\blacksquare$
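As a side check: the textbook route to this result computes $\inf(S)$ as the supremum of the set of lower bounds of $S$ taken in all of $L$ (note the posted proof's $S'$ is carved out of $S$ itself, which is worth revisiting). A small sketch on the divisor lattice of $12$, where suprema are lcms and the bottom element is $1$ (all names and the example lattice are mine):

```python
from math import gcd
from functools import reduce

L = [1, 2, 3, 4, 6, 12]  # divisors of 12, ordered by divisibility

def sup(subset):
    """Supremum in the divisibility order: lcm, with sup(empty set) = bottom = 1."""
    return reduce(lambda x, y: x * y // gcd(x, y), subset, 1)

def inf(subset):
    """Infimum built only from suprema: sup of the set of lower bounds taken in L."""
    lower_bounds = [l for l in L if all(s % l == 0 for s in subset)]
    return sup(lower_bounds)
```

The empty subset illustrates why the bottom element matters: $\sup(\emptyset)$ is the bottom, and $\inf(\emptyset)$ comes out as the top.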

Integral curve on complete surfaces

Posted: 09 May 2021 08:04 PM PDT

I proved the following result

"Prove that if $V$ is a differentiable vector field on a compact surface $S$ and $\alpha(t)$ is the maximal trajectory of $V$ with $\alpha(0)=p\in S$, then $\alpha(t)$ is defined for all $t\in\mathbb{R}.$"

For every $q\in S$, there exists a neighborhood $B$ of $q$ and an interval $(-\varepsilon,\varepsilon)$ such that the trajectory $\alpha(t)$, with $\alpha(0)=q$, is defined on $(-\varepsilon,\varepsilon)$. By compactness, it is possible to cover $S$ with a finite number of such neighborhoods $B_j$. Let $\varepsilon_0=\min\{\varepsilon_j\}$. If $\alpha(t)$ is defined for $t<t_0$ and is not defined for $t_0$, we take $t_1\in (0,t_0)$ with $|t_0-t_1|<\varepsilon_0/3$. Considering the trajectory $\gamma(t)$ of $V$ with $\gamma(0)=\alpha(t_1)$ leads us to a contradiction.

My question: Is the result still valid if we assume a complete surface instead of a compact one? And if it is true (which I believe), how could it be proved? I have tried but have not been successful.

Proving that if $f$ is nonnegative and increasing, then the maximal function of $f$ is also increasing.

Posted: 09 May 2021 08:01 PM PDT

The answer to this question suggests that if a function $f$ is nonnegative and increasing, then the Hardy–Littlewood maximal function of $f$ is also increasing.

The maximal function is defined as $$Mf(x)=\sup_{r>0}\frac{1}{2r}\int^{x+r}_{x-r}|f|~d\lambda$$ where $x\in \mathbb{R}$ and $\lambda$ is the Lebesgue measure.

The proof that the said answer provides is extremely intuitive and brief. Essentially, it is based on the assertion that if $y < x$, then for each $r > 0$,

$$\int_{y-r}^{y+r} f(t)\; dt \le \int_{x-r}^{x+r} f(t)\; dt \qquad(*)$$

Of course, $(*)$ is obvious in the case of Riemann integrals, but I am interested in rigorously proving $(*)$ since the integration is w.r.t. the Lebesgue measure. How can $(*)$ be proved given the hypothesis that $f$ is a nonnegative increasing function?
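For what it's worth, one standard route to $(*)$ is translation invariance of the Lebesgue integral plus monotonicity. Writing $s = x - y > 0$,

$$\int_{y-r}^{y+r} f(t)\; dt = \int_{x-r}^{x+r} f(t-s)\; dt \le \int_{x-r}^{x+r} f(t)\; dt,$$

where the equality is invariance of $\lambda$ under the translation $t \mapsto t - s$, and the inequality holds because $f$ is increasing, so $f(t-s) \le f(t)$ pointwise, together with monotonicity of the integral for nonnegative measurable functions.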

Surjective function combinatorics

Posted: 09 May 2021 08:06 PM PDT

And here I come again. I think I'm having a math block these days with things I shouldn't have problems with. Anyway... given $\#A = n$ (the number of elements of $A$ is $n$) and $\#B = p$, with $n \geq p$, how many surjective functions from $A$ to $B$ are there?

I was developing the following reasoning:
$\sum _{k=1}^{p}x_{k}=n$ is the problem associated with distributing $n$ arrows among $p$ boxes; since we have to cover the codomain, I transform this problem with the change of variables $y_{k}=x_{k}-1$.
This gets me to $\sum _{k=1}^{p}y_{k}=n-p$, and the result $\frac{(n-1)!}{(n-p)!(p-1)!}=C_{n-1}^{n-p}$.
I tried some inclusion-exclusion with it, but I can't construct a sequence...
The book says the answer is $\sum _{k=0}^{p}(-1)^{k}C_{p}^{k}(p-k)^n$
Obs.: $C_{n}^{p}=\frac{n!}{(n-p)!(p)!}$
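The book's inclusion-exclusion formula can be checked directly against brute force for small $n, p$ (a quick sketch; names are mine):

```python
from itertools import product
from math import comb

def surjections_formula(n, p):
    """Inclusion-exclusion count: sum_k (-1)^k C(p,k) (p-k)^n."""
    return sum((-1) ** k * comb(p, k) * (p - k) ** n for k in range(p + 1))

def surjections_brute(n, p):
    """Count maps {1..n} -> {1..p} whose image is all of {1..p}."""
    return sum(1 for f in product(range(p), repeat=n) if len(set(f)) == p)
```

(The stars-and-bars count $C_{n-1}^{n-p}$ only tracks the sizes of the preimages, not which elements of $A$ land where, which is why it undercounts.)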

If we have a $\sigma$-algebra on $\mathbb{R}$, are all subsets of it Lebesgue measurable?

Posted: 09 May 2021 07:30 PM PDT

Given a $\sigma$-algebra, say $\mathcal{S}$ on the reals $\mathbb{R}$, is it true that every subset $E\in \mathcal{S}$ is Lebesgue measurable?

Efficient way to generate a sequence that fits multiple constraints

Posted: 09 May 2021 08:03 PM PDT

Given two numbers $a, b \in \mathbb{R}, a \neq b$, and two constants $A, B \in \mathbb{Z}$.

My goal is to generate a set $G$ with $n$ distinct elements that contains both $a$ and $b$ and satisfies the following constraints ($n$ is not fixed and can be arbitrary).

Denote the number of elements in $G$ that are larger than $a$ (resp. $b$) as $L_a$ (resp. $L_b$), and the number of elements in $G$ that are smaller than $a$ (resp. $b$) as $S_a$ (resp. $S_b$).

The constraints are $S_a - L_a = A$ and $S_b - L_b = B$.

Is it possible to achieve it in polynomial time?
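One constructive sketch (all names, and the WLOG assumption $a<b$, are mine): with $x$, $y$, $z$ counting elements below $a$, strictly between $a$ and $b$, and above $b$, the constraints read $x-y-z-1=A$ and $x+y+1-z=B$, so $B-A=2y+2$ forces $B-A-2$ to be even and nonnegative; when it is, $y=(B-A-2)/2$ and $x-z=A+1+y$ can be solved directly, which is certainly polynomial time.

```python
def build(a, b, A, B):
    """Try to build a finite set G containing a and b with
    S_a - L_a = A and S_b - L_b = B.  Assumes a < b; returns None
    when the feasibility condition B - A - 2 even and >= 0 fails."""
    assert a < b
    if (B - A - 2) < 0 or (B - A - 2) % 2:
        return None
    y = (B - A - 2) // 2                 # elements strictly between a and b
    d = A + 1 + y                        # required difference x - z
    x, z = (d, 0) if d >= 0 else (0, -d)
    below = [a - i - 1 for i in range(x)]
    between = [a + (b - a) * (i + 1) / (y + 1) for i in range(y)]
    above = [b + i + 1 for i in range(z)]
    return sorted(below + [a, b] + between + above)

def check(G, v, target):
    smaller = sum(1 for g in G if g < v)
    larger = sum(1 for g in G if g > v)
    return smaller - larger == target
```

The parity obstruction $B-A=2y+2$ also answers the feasibility side: no set exists at all unless $B-A$ is an even number at least $2$ (given $a<b$).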

Determine whether the mappings are topologically conjugate

Posted: 09 May 2021 07:10 PM PDT

I'm trying to solve this problem but am stuck at the very beginning. I have no idea how to start or how to think about this problem (the problem statement was given as an image). Any kind of help would be highly appreciated!

How do I determine the time it takes to accelerate if I only know the distance traveled and what the amount of acceleration is?

Posted: 09 May 2021 08:08 PM PDT

I am currently lost, as I don't really have any clue how to solve this calculus problem; there is nothing we covered in class like this:

Determine how many seconds it would take for a car to accelerate uniformly from 0 to 60 miles per hour using $\frac{1}{20}$th of a mile long track. Give your answer in seconds.

What I've determined so far is that I somehow need to find the rate of acceleration, and then use integration to find the number of seconds (this could be wrong, I honestly don't know). What I'm thinking this problem looks like is something like this: $$\int_0^{60}a(t)dt = \frac{1}{20}$$ To my understanding, what I wrote here says that from 0 to 60 miles per hour, $\frac{1}{20}$th of a mile has been traveled. If I can somehow determine $a(t)$ then I can somehow find $t$. Sorry if this isn't making a lot of sense; I'm trying my best to show what my thought process is on this problem, but I really don't know how to solve something like this. It feels like I'm missing a lot of information.

In case someone was curious I am in Calculus I in my first year of college.

Also, just to be clear, I'm not really asking for an answer to the question, maybe just some insight on how I can approach it as I'm not sure my thinking is correct.
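Since the asker wants the approach rather than the answer, here is just the setup in code form: for constant acceleration from rest, $d=\tfrac12 a t^2$ and $v=at$ combine to $t = 2d/v$ with no integration needed, once the units are made consistent (feet and seconds below; the variable names are mine).

```python
v = 60 * 5280 / 3600   # 60 mph in feet per second (= 88 ft/s)
d = 5280 / 20          # 1/20 of a mile in feet (= 264 ft)

# Constant acceleration from rest: d = (1/2) a t^2 and v = a t.
# Eliminating a gives d = (1/2) v t, i.e. t = 2 d / v.
t = 2 * d / v
a = v / t              # the implied acceleration in ft/s^2
```

The key insight this encodes: the asker's integral mixes a time integral with velocity limits; integrating $a(t)$ over *time* gives velocity, and integrating velocity over time gives distance, so two kinematic relations (not one integral) pin down $t$.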

Equivalence class of composition of a loop and a homeomorphism

Posted: 09 May 2021 07:15 PM PDT

How do I solve the following: Let $X$ be a topological space with base point $x_0$ and $f$ be a loop at $x_0$. Let $h:[0,1] \rightarrow [0,1]$ be a homeomorphism. Prove that either $[f \circ h] = [f]$ or $[f \circ h] = [f]^{-1}$. I tried showing that there is a path homotopy between $f\circ h$ and $f$ or $f^{-1}$ but that did not work out.

What is the graph of $y=1+1/2+1/3+\dots+1/x$?

Posted: 09 May 2021 08:11 PM PDT

I was looking at the expression $1/1+1/2+1/3+1/4+\dots+1/x$, and then I thought: what would the graph of this be?
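For integer $x$ this is the harmonic number $H_x$, whose graph hugs $\ln(x)+\gamma$, with $\gamma\approx 0.5772$ the Euler-Mascheroni constant; a quick check of that shape:

```python
import math

def H(x):
    """Harmonic number H_x = 1 + 1/2 + ... + 1/x for integer x >= 1."""
    return sum(1.0 / k for k in range(1, x + 1))

GAMMA = 0.5772156649015329  # Euler-Mascheroni constant
# H(x) - ln(x) tends to GAMMA, with error on the order of 1/(2x),
# so the graph looks like a logarithm shifted up by GAMMA.
```

So the plot grows without bound but ever more slowly, like $\ln(x)$.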

$1/E(x)$ vs. $E(1/x)$

Posted: 09 May 2021 07:18 PM PDT

Question concerning the MVUE and the geometric distribution; I'm trying to apply the Rao-Blackwell theorem here. We know that the geometric distribution is a regular exponential class with $$Y = \sum x$$ as our sufficient and complete statistic. However, $$ E(Y) = \frac{n(1- \theta)}{\theta} $$ so $Y$ is not an unbiased estimator of $\theta$. Since $1/E(Y)$ is usually not $E(1/Y)$, I tried $$ E\left(\frac{1}{Y}\right) = \sum_{y}\frac{1}{y}\,\theta(1-\theta)^{y-1}, $$ but that's where I got royally stuck. Is there a way around this summation so that I can get an estimator with expectation $\theta$?

Why do we need to use Opposite categories/Contravariant functions

Posted: 09 May 2021 07:18 PM PDT

I know this is probably a really dumb question (apologies), but I'm currently trying to learn category theory and I'm struggling to find a resource/textbook that doesn't just define these concepts in a really unmotivated way and then move on.

Looking at an opposite category doesn't seem to add any new or interesting structure to the things you're studying, and the correspondence between morphisms is so natural and trivial that it seems really unclear what you can actually "do" with the opposite category that you can't just do with $C$. Contravariant functors then just seem to be less intuitive ways to talk about basically the same stuff.

Clearly this is just me being dumb/not finding the right material yet but can someone point out what I'm missing? Even just an example where thinking in terms of opposite categories/contravariant functors is actually much more natural/illuminating?
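For whatever it's worth, one place the contravariant direction feels natural is predicates, or any "consumer" of values: a map $g : A \to B$ turns a predicate on $B$ into a predicate on $A$ by precomposition, reversing the arrow. A tiny sketch (names are mine):

```python
class Predicate:
    """A boolean-valued test; contravariant in its argument."""
    def __init__(self, run):
        self.run = run

    def contramap(self, g):
        # A map g: A -> B pulls a Predicate on B back to a Predicate on A.
        # Note the arrow reversal: g goes A -> B, the predicate moves B -> A.
        return Predicate(lambda a: self.run(g(a)))

is_even = Predicate(lambda n: n % 2 == 0)     # predicate on integers
has_even_length = is_even.contramap(len)      # predicate on strings
```

This is the same phenomenon as $\operatorname{Hom}(-, B)$: things that consume values transform oppositely to things that produce them, and the opposite category is just the bookkeeping that lets you treat both cases with one (covariant) functor concept.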

Explicit distribution of the d-dimensional Ornstein-Uhlenbeck process

Posted: 09 May 2021 07:45 PM PDT

I am trying to compute the distribution of the d-dimensional Ornstein-Uhlenbeck process $dX_t^{\epsilon}=-QX_t^{\epsilon}\,dt + \epsilon\, dB_t$, $X_0^{\epsilon}=x \in \mathbb{R}^d$. I know that, by Itô's formula, you can obtain its strong solution as $$X_t^\epsilon(x)=e^{-Qt}x+\epsilon \int_0^t e^{-Q(t-s)}\,dB_s.$$

Here $\int_0^t e^{-Q(t-s)}\,dB_s$ is a Wiener integral. For the one-dimensional case, I've seen that the Wiener integral is distributed as $\int_0^t g(s)\,dB_s \sim \mathcal{N}\left(0, \int_0^t g^2(s)\, ds\right)$, and I would like to use this to determine the distribution in the d-dimensional case. It is clear that the mean vector of $X_t^\epsilon(x)$ should be $e^{-Qt}x$; however, I am struggling to find the covariance of the associated d-dimensional Wiener integral.

Any references or ideas on how to proceed are very much welcomed.

Edit: In this case, I am assuming that $Q$ is a $d\times d$ deterministic matrix which is exponentially stable (the real parts of its eigenvalues are all positive).
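For what it's worth, the multidimensional Itô isometry gives the covariance directly (a sketch, assuming $B$ is a standard $d$-dimensional Brownian motion):

$$ \operatorname{Cov}\left(X_t^{\epsilon}\right) = \epsilon^2 \int_0^t e^{-Q(t-s)}\left(e^{-Q(t-s)}\right)^{\top} ds = \epsilon^2 \int_0^t e^{-Qu}\, e^{-Q^{\top}u}\, du, $$

so that $X_t^{\epsilon}(x) \sim \mathcal{N}\!\left(e^{-Qt}x,\; \epsilon^2\int_0^t e^{-Qu}e^{-Q^{\top}u}\,du\right)$. When $Q$ is symmetric the integral evaluates in closed form to $\epsilon^2 (2Q)^{-1}\left(I - e^{-2Qt}\right)$.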

Prove $\sum_{k=1}^{n-1}\frac{(n-k)^2}{2k} \geq \frac{n^2\log(n)}{8}$

Posted: 09 May 2021 08:09 PM PDT

As the title says, prove $$\sum_{k=1}^{n-1}\frac{(n-k)^2}{2k} \geq \frac{n^2\log(n)}{8},$$ for $n>1$. This inequality is from Erdős, "Problems and results on the theory of interpolation". I, Lemma 3.

My attempt: since $H_{n-1}=\sum_{k=1}^{n-1}\frac{1}{k} > \int_{1}^n\frac{1}{t}dt = \log(n),$ we then have $$ \sum_{k=1}^{n-1}\frac{(n-k)^2}{2k} = \sum_{k=1}^{n-1}\frac{n^2 - 2kn + k^2}{2k} = \frac{n^2}{2}H_{n-1} - n(n-1) + \frac{n(n-1)}{4} > \frac{n^2\log(n)}{2} - \frac{3n(n-1)}{4}. $$

Am I missing something here?
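Assuming $\log$ is the natural logarithm, the attempt is nearly done: the bound $\frac{n^2\log n}{2} - \frac{3n(n-1)}{4} \ge \frac{n^2\log n}{8}$ rearranges to $n\log n \ge 2(n-1)$, which holds for $n \ge 5$, leaving only $n=2,3,4$ to check by hand. A numeric confirmation of both the target inequality and that threshold:

```python
import math

def lhs(n):
    """The sum on the left-hand side."""
    return sum((n - k) ** 2 / (2 * k) for k in range(1, n))

def rhs(n):
    return n * n * math.log(n) / 8

# The attempt's bound n^2 log(n)/2 - 3n(n-1)/4 >= rhs(n) is equivalent
# to n*log(n) >= 2*(n-1), which first holds at n = 5.
```

So the only missing step is noting where $n\log n \ge 2(n-1)$ kicks in and verifying the three small cases directly.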

Any theory deals with spaces that mixes the size of the tuples?

Posted: 09 May 2021 07:31 PM PDT

Specifically, I am looking for a theory that studies the properties of the set of all finite tuples of a set $X$.

Take, for instance, the set of the reals: a 2-d space can be constructed as the product $R \times R$.

But here, I am looking for a set that contains tuples of various sizes, for instance:

$$ R=\bigcup_{i=0}^n\mathbb{R}^i $$

or

$$ N=\bigcup_{i=0}^n\mathbb{N}^i $$

Are these topological spaces? Can one consider a path, or a metric, within these spaces? Any articles (topology, metric, or otherwise; it doesn't have to be geometric, I'm just trying to learn all I can about these structures) that tackle these?

Pullback on $\textbf{Set}$

Posted: 09 May 2021 07:25 PM PDT

Let $f:X \rightarrow B \leftarrow Y:g$ be a diagram in Set. If $Y$ is a $B$-indexed set $\{G_{b}\}$, the pullback for such diagram is the $X$-indexed set $P := \{G_{fx}\}_{x \in X}$.

My question is: What are the morphisms $g': P \rightarrow X, f':P \rightarrow Y$ that make $$\require{AMScd} \begin{CD} P @>f'>>Y\\ @Vg'VV @VVgV\\ X@>>f>B \end{CD}$$a commutative square?

Edit: Example taken from Mac Lane's book, Sheaves in geometry and logic, section 2 of chapter 1 (Pullbacks), pages 29 and 30.
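In concrete terms, the pullback in $\textbf{Set}$ can be presented as $P=\{(x,y): f(x)=g(y)\}$ with the two projections; the indexed-set description in the question is this same object read fiberwise. A small sketch (the example data are mine):

```python
def pullback(X, Y, f, g):
    """Pullback of the cospan f: X -> B <- Y : g in Set, with its projections."""
    P = {(x, y) for x in X for y in Y if f(x) == g(y)}
    g_prime = lambda p: p[0]   # g': P -> X, first projection
    f_prime = lambda p: p[1]   # f': P -> Y, second projection
    return P, g_prime, f_prime

X = {1, 2, 3}
Y = {"a", "b"}
f = lambda x: x % 2            # X -> B = {0, 1}
g = {"a": 0, "b": 1}.get      # Y -> B
P, g_prime, f_prime = pullback(X, Y, f, g)
```

In the indexed-set picture, the fiber of $g'$ over $x$ is exactly $G_{fx}$, so $g'$ forgets the fiber element and $f'$ lands it in the disjoint union $Y=\coprod_b G_b$; the square $f\circ g' = g\circ f'$ commutes by the defining condition $f(x)=g(y)$.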

Is it possible to have an S³ smooth manifold of constant curvature?

Posted: 09 May 2021 07:42 PM PDT

Note: I am a mathematics enthusiast and may not be able to respond to any queries at the level they may be written at.

I believe it is possible to create an (S¹)³ space that has constant curvature at all points, as we could make the dimensions form Clifford tori pair-wise.

However, S³ is quite different, especially if I further require it to have differentiation defined within it (I believe that is what "smooth" means?), and my knowledge doesn't extend that far.

Is it possible for an S³ space to be a smooth manifold with constant curvature (i.e. identical to a Euclidean space)?

Could anyone be kind enough to satisfy my curiosity?

Thank you.

Are the finite subsets of $\mathcal{P}(\mathbb{N})$, finite?

Posted: 09 May 2021 07:51 PM PDT

I am trying to split up the finite subsets of $\mathcal{P}(\mathbb{N})$ into two disjoint groups $X \sqcup Y$ so that no two neighbouring sets are in the same group. (We introduce the term 'neighbouring': two sets $A,B \subseteq \mathbb{N}$ are neighbouring if $A$ is obtainable by adding an element to $B$, i.e. $A=B \cup \{c\}$ for some $c \not\in B$, or the other way around.)

In my proof, I am trying to use the compactness theorem of propositional logic to find a partitioning for the finite subsets of $\mathcal{P}(\mathbb{N})$, to deduce the fact that there exists a partitioning for $\mathcal{P}(\mathbb{N})$ as a whole.

Thus far, I could obtain a partition for the finite subsets $T:= \{1,\dots,n\} \in \mathcal{P}(\mathbb{N})$ with $$X':= \text{all subsets of } T \text{ with an odd number of elements},$$ $$Y':= \text{all subsets of } T \text{ with an even number of elements},$$ so that $\mathcal{P}(T)=X' \sqcup Y'$.
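The parity partition can be verified mechanically for small $n$ (a sketch; names are mine): adding one element always flips the parity of the cardinality, so neighbouring finite sets land in different parts.

```python
from itertools import chain, combinations

def finite_subsets(n):
    """All subsets of {1, ..., n}, as tuples."""
    universe = range(1, n + 1)
    return chain.from_iterable(combinations(universe, r) for r in range(n + 1))

def part(s):
    """'X' for odd cardinality, 'Y' for even."""
    return "X" if len(s) % 2 else "Y"

def neighbours_separated(n):
    """Check that every pair of neighbouring subsets gets different labels."""
    for s in finite_subsets(n):
        for c in range(1, n + 1):
            if c not in s and part(s) == part(s + (c,)):
                return False
    return True
```

This also makes concern 2 concrete: the cardinality-parity labelling simply has no value to assign to an infinite set, which is exactly where the compactness argument has to do its work.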

Now, theoretically, from that partitioning, with the compactness theorem it should also follow that $\mathcal{P}(\mathbb{N})=X \sqcup Y$. But my two main concerns are:

  1. The compactness theorem is merely applicable in the sense of considering the finite subsets of the power set. But what if those subsets are, in itself, infinite? I.e., what if the subsets themselves do not have a finite number of elements.
  2. In light of 1, how would we be able to measure the "length" of those subsets? For instance, $\{even \ numbers\}$ and $\{1, \ even \ numbers\}$ would be neighbouring according to our definition. But the "length" of those sets would be neither even nor odd, and they would thereby not fit into either partition.

In this thread somebody has already tried to construct this partitioning, but used algebraic definitions. I feel like my proof thus far with the compactness theorem is right but that the last tiny bit is missing; or maybe I am overreacting and it is just fine as it is.

(In addition: My understanding of how the compactness theorem would apply here.

Our formula $\Phi$ is satisfied iff there exists a partitioning $\mathcal{P}(\mathbb{N})=X \sqcup Y$.

Now, applying the compactness theorem, we merely take the finite subsets of $\mathcal{P}(\mathbb{N})$, which we have defined as $\mathcal{P}(T)$, so that we have a partitioning for the finite subsets $\mathcal{P}(T)=X' \sqcup Y'$. This is nothing other than a partition for the finite subsets (or formulae) of our initial formula $\Phi$. And by the compactness theorem, if every finite subset of the formula $\Phi$ is satisfied (we satisfy them by allocating the sets to one of the two disjoint groups), and we thus have a model for those formulae, we can conclude that it is also a model for $\Phi$ itself.)

Thank you so much in advance for any help! Lin

What is the difference between Universal-Existential Statement and Existential-Universal Statement?

Posted: 09 May 2021 07:56 PM PDT

Here is my understanding on Universal-Existential Statement:

For every square number in a set, there exist a positive and a negative integer square root.

Is it the right way to write the statement?

How can this be modified for Existential Universal Statement?
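A stock contrast (not tied to the squares example) that shows why the quantifier order matters:

$$\forall x\in\mathbb{Z}\;\exists y\in\mathbb{Z}:\; x+y=0 \qquad \text{(true: take } y=-x\text{)}$$

$$\exists y\in\mathbb{Z}\;\forall x\in\mathbb{Z}:\; x+y=0 \qquad \text{(false: no single } y \text{ works for every } x\text{)}$$

In a universal-existential statement the witness $y$ may depend on $x$; in an existential-universal statement one fixed $y$ must work for every $x$, which is a strictly stronger claim.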

Relationship between local non-integrability of $1/f$ and one-sided differentiability of $f$ at a zero

Posted: 09 May 2021 07:42 PM PDT

Let $a,c,b \in \mathbb{R}$ with $a<c<b$.

Let $\mathcal{F}$ be the set of (well-defined pointwise; i.e. not modulo almost-everywhere equality) continuous functions $f : [a,b]\rightarrow \mathbb{R}$ such that $f(c) = 0$, and such that $f$ has no zeros other than $c$. (The last condition is just for simplicity. (Though I guess unfortunately this makes $\mathcal{F}$ not closed under linear or even convex combinations.))

  • Let $\mathcal{A}$ be the set of all $f\in \mathcal{F}$ for which $\frac{1}{f} \notin L^1([a,b])$, i.e. $$ \int_a^b \frac{dt}{|f(t)|} = \infty\,. $$

  • Next, let $\mathcal{B}$ be the set of all $f \in \mathcal{F}$ for which the left- and right-sided derivatives of $f$ at $c$ exist; i.e. the one-sided limits $$ \lim_{h \to 0^{\pm}} \frac{f(c+h) - f(c)}{h} $$ exist for each sign $\pm$.


If I'm not mistaken, the answer posted on my earlier question If $f(x)$ is differentiable, can $1/f$ be locally integrable at a zero of $f$? [ credit to the answerer! ] shows that $$ \mathcal{B} \subseteq \mathcal{A} \,. $$

Here my question is whether the converse $\mathcal{A} \subseteq \mathcal{B}$ is true. I haven't managed to show this, nor have I found a counterexample.


EDIT: I changed the definition of $\mathcal{B}$ (and the title of the question) after I noticed a flaw in the original definition.

A result about Fermat's numbers. Is my proof correct? Is that result useful? Can we generalize that result?

Posted: 09 May 2021 07:20 PM PDT

Let $n$ be an integer and $F_n:=2^{2^n}+1$. $$n=2,3,4:F_n\equiv17\pmod{30}$$

$\mathbf{Result:}\;n>1:F_n\equiv17\pmod{30}$

$\mathbf{Proof:}$ Suppose that $F_n - 1\equiv16\pmod{30}$ for some $n\ge2$ (the case $n=2$ holds by the computation above). Then $2^{2^{n+1}}=({2^{2^n}})^2\equiv16^2\equiv16\pmod{30}$, and so $F_{n+1}\equiv17\pmod{30}$.

To generalize, I propose to use primoradic (see stub OEIS: https://oeis.org/wiki/Primorial_numeral_system).

That's the way I found that result, which I didn't know before, and I wonder if there are other results with $\;210,\;2310,\;30030,\;\ldots$ (primorials).

P.S.: In primoradic, using Charles-Ange Laisant's notations for factoradic (with $A=10, B=11, C=12, D=13,\ldots$) $$17=(000000.221)$$ $$257=(000011.221)$$ $$65\,537=(2.240.221)$$ $$F_5=(J.5F1.721.221)$$ Perhaps someone could give $F_6$, $F_7$, ... in primoradic, just for fun. Fill in the holes: $$F_6=(........0.221)$$ $$F_7=(.......:1:...)$$ $$F_8=(.......:0:...)$$ $$F_9=(.......:1:...)$$ $$F_{10}=(......0:...)$$ $$F_{11}=(......1:...)$$ $$F_{12}=(......0:...)$$ $$F_{13}=(......1:...)$$ $$2^{16384}+1=(......:0:...)$$ I have verified each of these results with my spreadsheet. Maybe $F_n\equiv17\pmod{210}$ if $n$ is even and $47$ if $n$ is odd? We need a proof. Perhaps this will be useful: "I'm trying to generalize some simple results about $2^n$. It's useful to write them in primoradic (see stub OEIS)."
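The congruences are cheap to verify by machine (a quick sketch; the mod-210 pattern is the conjecture at the end of the post, checked here only for small $n$, which of course is not a proof):

```python
def fermat_mod(n, m):
    """F_n mod m, via modular exponentiation, without forming the huge F_n."""
    return (pow(2, 2 ** n, m) + 1) % m

# Claimed: F_n = 17 (mod 30) for n >= 2; conjectured: mod 210 the
# residue alternates between 17 (n even) and 47 (n odd).
```

Three-argument `pow` keeps every intermediate value below $m$, so checking, say, $F_{13}$ costs only a few thousand modular multiplications.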

Upper bound on $\sum_{J\subset [n], |J|=i}\frac{1}{2^{s(J)}}$ where $s(J)=\sum_{j\in J}j$

Posted: 09 May 2021 08:08 PM PDT

Define $s(J)=\sum_{j\in J}j$ and $[n]=\{1,...,n\}$. I'm trying to get some reasonable upper bound on $$\sum_{J\subset [n], |J|=i}\frac{1}{2^{s(J)}}.$$

Actually, I want an upper bound on $$\sum_{i=k}^{2k}\sum_{J\subset [n], |J|=i}\frac{1}{2^{s(J)}}$$ for a fixed $k<\frac{n}{2}$.

The best I could get was $$\sum_{i=k}^{2k}\sum_{J\subset [n], |J|=i}\frac{1}{2^{s(J)}}\leq \sum_{i=0}^{n}\sum_{J\subset [n], |J|=i}\frac{1}{2^{s(J)}}=\prod_{i=1}^n \left(1+\frac{1}{2^i}\right).$$

I'd appreciate it if someone could find anything better than that, preferably depending on $k$.

Edit: Using $$\sum_{J\subset [n], |J|=i}\frac{1}{2^{s(J)}}\leq \sum_{J\subset [n], |J|=i}\frac{1}{2^{\frac{i(i+1)}{2}}}$$ wasn't enough for me either.
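The layer sums $f_i = \sum_{|J|=i} 2^{-s(J)}$ can be computed exactly with a knapsack-style DP over $j=1,\dots,n$ (a sketch using exact rationals; names are mine), which at least makes it easy to probe how the true value of $\sum_{i=k}^{2k} f_i$ compares with candidate bounds:

```python
from fractions import Fraction

def layer_sums(n):
    """f[i] = sum over subsets J of {1..n} with |J| = i of 2^(-s(J))."""
    f = [Fraction(0)] * (n + 1)
    f[0] = Fraction(1)
    for j in range(1, n + 1):
        w = Fraction(1, 2 ** j)
        for i in range(n, 0, -1):   # descend so each j is used at most once
            f[i] += f[i - 1] * w
    return f

n, k = 20, 4
f = layer_sums(n)
partial = sum(f[i] for i in range(k, 2 * k + 1))
product_bound = 1
for i in range(1, n + 1):
    product_bound *= 1 + Fraction(1, 2 ** i)
```

The identity $\sum_i f_i = \prod_{i=1}^n\left(1+2^{-i}\right)$ (the generating function evaluated at $1$) doubles as a correctness check on the DP.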

Prove that $G$ is a block if and only if given a vertex and an edge of $G$ there exists a cycle containing them.

Posted: 09 May 2021 07:39 PM PDT

Prove that $G$ is a block if and only if given a vertex and an edge of $G$ there exists a cycle containing them.

Attempt:

Let $u\in V(G)$ and $a\in E(G)$, $a=wv$. By hypothesis, there exists a cycle $C$ containing $u$ and $w$. As for $v$, there are two possibilities: either it is among the vertices of the cycle or it is not. If $v\in V(C)$, we obtain a cycle $C'$ containing $u$ and the edge $a$ as follows:

\begin{align*} C&=u, u_1, \ldots , w, w_1, \ldots v, v_1, \ldots u \\ C'&=u,\: u_1, \ldots , w, v, v_1, \ldots u \end{align*}

That is, we replace the part of $C$ between $w$ and $v$ by the edge $a$. If $v\notin V(C)$, we have the same situation as in the previous demonstration, and we can find two paths from $u$ to $v$, one of which contains the edge $a$.

How can I prove these two questions without using the following theorem?

Posted: 09 May 2021 08:12 PM PDT

Question 1: Let $X_1, X_2, \cdots$ be independent random variables such that $$P(X_n=-n^{\theta})=P(X_n=n^{\theta})=\frac{1}{2}.$$ If $\theta > -\frac{1}{2}$, prove that the Lyapunov condition holds and the sequence satisfies the central limit theorem.

Question 2: Let $X_1, X_2, \cdots$ be independent random variables such that $$P(X_n=-n^{\theta})=P(X_n=n^{\theta})=\frac{1}{6n^{2(\theta -1)}}\quad \text{and} \quad P(X_n=0)=1-\frac{1}{3n^{2(\theta -1)}}.$$ If $1< \theta < \frac{3}{2}$, prove that the Lindeberg condition holds and the sequence satisfies the central limit theorem.

THEOREM: Let $\lambda >0$. Then $$\frac{1}{n^{\lambda +1}}\displaystyle\sum_{k=1}^{n} k^{\lambda}\underset{n\to +\infty}{\longrightarrow} \frac{1}{\lambda+1},$$ so that $\displaystyle\sum_{k=1}^{n} k^{\lambda}$ is $\mathcal{O}(n^{\lambda + 1})$.

Solution of question 1: $EX_n=0\; \forall n\in \mathbb{N}$, $Var(X_n)=EX_n^2=n^{2\theta}$, and $\displaystyle\sum_{k=1}^{n} Var(X_k) = \displaystyle\sum_{k=1}^{n}k^{2\theta}$ is (by the theorem above) $\mathcal{O}(n^{2\theta +1})$. Also $s_n=\left(\sum_{k=1}^{n}Var(X_k)\right)^{\frac{1}{2}}$ is (using the theorem above) $\mathcal{O}\left(n^{(2\theta+1)/2}\right)$. For the Lyapunov condition we need some $\delta >0$ such that \begin{align*} \lim_{n\to +\infty} \frac{1}{s_n^{2+\delta}} \displaystyle\sum_{k=1}^{n} E|X_k|^{2+\delta} &=\lim_{n\to +\infty} \frac{1}{s_n^{2+\delta}}\displaystyle\sum_{k=1}^{n} k^{(2+\delta)\theta}\\ &=\lim_{n\to +\infty} \frac{\mathcal{O}\left(n^{(2+\delta)\theta+1}\right)}{\mathcal{O}\left(n^{(2+\delta)(2\theta +1)/2}\right)}\\ &=\lim_{n\to +\infty} \frac{\mathcal{O}(n)}{\mathcal{O}(n^{(2+\delta)/2})}\\ &=0\quad \text{for}\; \delta=2. \end{align*} Thus the Lyapunov condition is satisfied for every $\theta > -\frac{1}{2}$ with $\delta =2$.

Therefore, $$\frac{\displaystyle\sum_{k=1}^{n} X_k - E\sum_{k=1}^{n} X_k}{\sqrt{\displaystyle\sum_{k=1}^{n}Var(X_k)}}\overset{D}{\longrightarrow} \mathcal{N}(0,1).$$ The convergence above means that converge in distribution to standard normal distribution $\mathcal{N}(0,1)$.

REMARK: Notice that question 1 is already answered; however, I'm trying to prove it again without using the theorem above. Can you help me with this?
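The Lyapunov ratio can also be evaluated numerically without the asymptotic theorem, which at least shows what has to be proved by hand (illustrated for $\theta=1$, $\delta=2$, my own choices; then the ratio is $\sum k^{4}/\big(\sum k^{2}\big)^{2}\sim \tfrac{9}{5n}\to 0$):

```python
def lyapunov_ratio(n, theta=1, delta=2):
    """sum_k E|X_k|^(2+delta) / s_n^(2+delta) for X_k = ±k^theta w.p. 1/2 each."""
    num = sum(k ** ((2 + delta) * theta) for k in range(1, n + 1))
    s2 = sum(k ** (2 * theta) for k in range(1, n + 1))   # s_n^2
    return num / s2 ** ((2 + delta) / 2)
```

To avoid the theorem entirely one can replace the $\mathcal{O}$ steps with the elementary two-sided bound $\int_0^n t^{\lambda}\,dt \le \sum_{k=1}^n k^{\lambda} \le \int_0^{n+1} t^{\lambda}\,dt$ for $\lambda>0$, which gives the same exponent comparison directly.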

Trying to prove corresponding angles are equal in case of parallel lines

Posted: 09 May 2021 08:08 PM PDT

I was trying to find/generate a proof of the equality of corresponding angles:

[Figure: Example 1]

(Keeping the above figure in mind) we can safely say that the angle between the two lines will change only when there is relative rotation between the lines.

Now we move the horizontal line AB, without causing any rotation (moving the line parallel to its original position AB), to some upward position; say the new position of the moved line is XY (see the figure below).

[Figure: Example 2]

  • There is no rotation, hence the angle between CD and XY doesn't change; so the new angle CZY is equal to angle CEB. Therefore, in the case of parallel lines cut by a transversal, the corresponding angles are equal.

Are you convinced by the proof, or do you find anything questionable?
