Sunday, July 10, 2022

Recent Questions - Mathematics Stack Exchange

Showing $G$ is cyclic if $G$ has at most two proper subgroups

Posted: 10 Jul 2022 04:02 AM PDT

Show that $G$ is cyclic if $G$ has at most two proper subgroups.

First of all, this is not a duplicate; I know there is an answer here: If $G$ has only 2 proper, non-trivial subgroups then $G$ is cyclic

However, I did not understand that answer, so I will write up my attempt.

My Attempt:

$\underline{\text{Case 1 : } \vert G\vert=pq, \text{where p and q are primes}}$

$\vert H\vert$ divides $pq$ and $\vert K\vert$ divides $pq$ by Lagrange's theorem. By assumption these subgroups are unique. Thus $G$ is cyclic, since there are at most four subgroups, of orders $1, p, q, pq$.

$\underline{\text{Case 2 : } \vert G\vert=p^2, \text{where p is prime}}$

Again, Lagrange implies there are only three subgroups, of orders $1, p, p^2$. A contradiction! So in the $pq$ case we can assume $p \neq q$.

$\underline{\text{Case 3 : } \vert G\vert=n, \text{where n is composite}}$

Since $n$ is composite we can write $n=mk$, where $1<m,k<n$. Again by Lagrange Theorem we have:

$$\vert H \vert \text{ divides } m\cdot k \text{ and } \vert K\vert \text{ divides } m\cdot k$$

These are the only subgroups. Hence $m$ and $k$ must be prime; otherwise there would be some subgroup of order $l$ for each $l$ dividing $m\cdot k$, and we would have more than $2$ subgroups. This means $n=pq$ with $p \neq q$.

Are there any mistakes?
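
For experimentation, here is a small Python sanity check on the cyclic case only (it says nothing about general $G$): in $\mathbb{Z}_n$ every subgroup is cyclic, so all subgroups can be listed directly, and one can see which orders $n$ admit at most two proper nontrivial subgroups.

    # Sketch: list the n <= 60 for which Z_n has at most two proper
    # nontrivial subgroups (in Z_n every subgroup is cyclic).
    def proper_nontrivial_subgroups(n):
        subs = set()
        for g in range(n):
            H = frozenset((g * k) % n for k in range(n))   # subgroup generated by g
            if 1 < len(H) < n:
                subs.add(H)
        return subs

    print([n for n in range(2, 61) if len(proper_nontrivial_subgroups(n)) <= 2])
    # prints the n that are prime, p^2, p^3, or pq with p, q distinct primes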

Analyzing a bracket ordering impartial game

Posted: 10 Jul 2022 04:01 AM PDT

The game begins with $n$ sets of brackets in the outer layer.
[][][]
Players then take turns choosing a pair of brackets, then placing it and all its contents into a different bracket in the same layer.
[][][] -> [][[]] -> [[[]]]
Another example: from [()][()], the only move available is [()[()]] (since order does not matter). Note that round brackets are equivalent to square brackets and are only used for easier reading.
The person who cannot move loses.

As shown in the first example, $n=3$ is a win for the second player. $n=4$ is also a win for the second player:
[][][][] -> [[]][][] -> [[]][[]] -> [()[()]] -> [[[[]]]]
$n=5$ is also a win for the second player, but beyond that it is difficult to check by hand. I think that all $n\ne 2$ are a win for the second player.


If we take any position and replace each bracket with a pair of brackets, then the position becomes a win for the second player. For example, all of [([()][()])], [()][()][([()])], etc. are a win for the second player. I call these positions 'double positions'.
Proof: The first player must choose an 'outer bracket' (shown as []) and place it inside another outer bracket. The second player can take the same bracket and put it inside the corresponding inner bracket (shown as ()). This results in another 'double position', meaning that the second player is guaranteed to always have a move.

Another observation I have made is related to stacks. Call a group of brackets an $n$-stack if there are exactly $n$ brackets with no moves available. For example:
1-stack - []
2-stack - [[]]
3-stack - [([])]
and so on. Take a position composed of two stacks of size $a$ and $b$. If $a=1$ or $b=1$, then the position is clearly a win for the first player. Otherwise, it is a win for the first player if and only if $a+b$ is odd.
This is because placing a stack inside another stack essentially reduces the size of that stack by $1$ (the outer bracket is irrelevant since it cannot be moved). This means both players will continue moving, avoiding a stack of size $1$, until there are two $2$-stacks, at which point the next player loses. But the parity of the sum of the sizes of the stacks changes after each step, so if $a+b$ was odd, then the first player will never reach [()][()].


Is anything known about this game? For which $n$ does the second player win? As stated above, I think it's all $n\ne 2$, but I don't know how to prove it. Also, how does combining two positions affect the winner?
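
Not a proof, but the conjecture can be probed by brute force. Below is a Python sketch under my reading of the rules (a bracket, together with its contents, may be moved only into a sibling bracket in the same layer, and such a move may occur at any depth); positions are canonicalised as sorted nested tuples and solved by the usual win/loss recursion.

    from functools import lru_cache

    def canon(brackets):
        # a layer is a multiset of brackets; each bracket is the canonical form
        # of its own contents, so a whole position is a sorted nested tuple
        return tuple(sorted(brackets))

    def moves(pos):
        """Positions reachable in one move: pick a bracket with a sibling and
        place it (with all its contents) inside that sibling; the move may be
        made in the outer layer or inside any nested bracket."""
        out = set()
        n = len(pos)
        for i in range(n):               # move bracket i into sibling j
            for j in range(n):
                if i != j:
                    rest = [pos[k] for k in range(n) if k not in (i, j)]
                    out.add(canon(rest + [canon(list(pos[j]) + [pos[i]])]))
        for i in range(n):               # or make a move entirely inside bracket i
            rest = [pos[k] for k in range(n) if k != i]
            for inner in moves(pos[i]):
                out.add(canon(rest + [inner]))
        return out

    @lru_cache(maxsize=None)
    def mover_loses(pos):
        # the player to move loses iff every move leads to a position
        # that is a win for the next player to move
        return all(not mover_loses(m) for m in moves(pos))

    for n in range(1, 8):
        start = canon([()] * n)          # n empty brackets in the outer layer
        print(n, "second player wins" if mover_loses(start) else "first player wins")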

Ill-founded iterated ultrapowers

Posted: 10 Jul 2022 04:00 AM PDT

I have two related questions.

  1. Can I find an ultrafilter on $\kappa$ which is $\theta$-complete for some uncountable $\theta<\kappa$ but not $\kappa$-complete?

  2. We know that if $\mathcal{U}$ is a $\kappa$-complete ultrafilter on $\kappa$, then the iterated ultrapower of the universe is well-founded. I would like to know a form of converse: Are there uncountable cardinals $\theta<\kappa$ with a $\theta$-complete but not $\kappa$-complete ultrafilter on $\kappa$ such that there is necessarily $\alpha$ with the $\alpha$th iterate $Ult^\alpha$ ill-founded?

Limit question involving the recursion formula $P_n = 2^{P_{n-1}} - 1$ for $n = 2,3,\dots$

Posted: 10 Jul 2022 03:49 AM PDT

Let $P_n = 2^{P_{n-1}} - 1$ for all $n = 2,3,\dots$, and let $P_1 = 2^x - 1$. Then $\lim_{x \to 0} \dfrac{P_n}{x} = \left(\ln\lambda\right)^{kn}$, where $\lambda+n$ is

My solution is as follow

$P_1 = 2^x - 1$

$P_2 = 2^{P_1} - 1 = 2^{2^x - 1} - 1 = \dfrac{2^{2^x} - 2}{2}$

$P_3 = 2^{P_2} - 1 = 2^{\frac{2^{2^x}-2}{2}} - 1 = \dfrac{2^{2^{2^x - 1}} - 2}{2}$

$P_4 = 2^{P_3} - 1 = 2^{\frac{2^{2^{2^x-1}}-2}{2}} - 1 = \dfrac{2^{2^{2^{2^x - 1}-1}} - 2}{2}$

$P_5 = 2^{P_4} - 1 = 2^{\frac{2^{2^{2^{2^x-1}-1}}-2}{2}} - 1 = \dfrac{2^{2^{2^{2^{2^x - 1}-1}-1}} - 2}{2}$

I am not able to get any further.
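
Not the intended closed form, but a quick numerical check in Python is possible; the ratios below suggest that $\lim_{x\to 0} P_n/x = (\ln 2)^n$, which may help identify $\lambda$ and $k$ in the stated answer.

    import math

    def P(n, x):
        # P_1 = 2**x - 1 and P_k = 2**P_{k-1} - 1; expm1 avoids cancellation for tiny x
        p = math.expm1(x * math.log(2))
        for _ in range(n - 1):
            p = math.expm1(p * math.log(2))
        return p

    x = 1e-8
    for n in range(1, 6):
        print(n, P(n, x) / x, math.log(2) ** n)
    # the two columns agree, suggesting lim_{x->0} P_n / x = (ln 2)^n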

Calculating limit of a simple sequence without guessing

Posted: 10 Jul 2022 03:49 AM PDT

Let's take the following sequence

$$ \displaystyle \lim_{n \to \infty}\frac{2n^3 + \sqrt{n} +1 }{n^3} $$

I am having problems with solving these kinds of limits and understanding some of the strategies for solving limits in general.

I can't plug in $ \infty $ because it is not a number, right?

I know I can extract the $ n^3 $ to get an equivalent expression. But I am not really sure why I can/should do this here.

As far as I understand, rewriting a sequence as an equivalent sequence is done when the original sequence has an indeterminate form as its limit. So we instead calculate the limit of an equivalent (usually simpler) sequence whose limit is the same limit as the one of the original sequence.

If that's true and I can't plug in $ \infty $, how can I tell that this sequence's limit is an indeterminate form?

But even after simplifying and getting $ 2 + \lim_{n \to \infty}\frac{\sqrt{n} +1 }{n^3} $

How do I proceed from here if I don't already know that something like $ \lim_{n \to \infty}\frac{n}{n^3} = 0$?

It seems to me that when solving limits, I just have to memorize some limits and recognize them or otherwise guess and use the $ \epsilon$-criterion.
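
This doesn't replace the algebraic argument, but a quick numerical evaluation (a Python sketch) at least shows where the sequence is heading before any manipulation is attempted.

    # A small numerical check (not a proof): evaluate the sequence for growing n.
    import math

    def a(n):
        return (2 * n**3 + math.sqrt(n) + 1) / n**3

    for n in (10, 100, 1000, 10**6):
        print(n, a(n))
    # the values approach 2, consistent with splitting off the dominant n^3 term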

Lagrange formulation and computing the variation

Posted: 10 Jul 2022 03:48 AM PDT

Problem formulation

My first question is: why are there multiple Lagrange multipliers $\eta_i$ in the above formulation, rather than just a single multiplier $\eta$?

Second, I do not know how to compute the variations in the above, for instance $\delta u_i$. I know the $\delta$ operator in the context of functionals, but, if my understanding is correct, I am not dealing with a functional here. Because if I insert the definition

$F_e(\varepsilon_e, \sigma_e) = \min\limits_{(\varepsilon_e^*,\sigma_e^*) \in E_e} \frac{1}{2} C_e \varepsilon _e^2 (\varepsilon_e - \varepsilon_e^*) + \frac{1}{2} \frac{\sigma_e^2}{C_e} (\sigma_e - \sigma_e^*) $

then eq. 5 in the picture above depends just on $\varepsilon_e$ and $\sigma_e$; $w_e$ is a constant for each $e$.

I would appreciate help with specifying expressions like $\delta u_i = \cdots$

So what do I have to "insert" in the minimization problem to actually come up with the already given solution for $\delta u_i$?

Can we enhance the $j_!\dashv j^{-1}$ adjunction to hom sheaves?

Posted: 10 Jul 2022 03:36 AM PDT

$ \def\Mod{\operatorname{Mod}} \def\O{\mathcal{O}} \def\Homs{\mathcal{H}om} \def\Hom{\operatorname{Hom}} \def\F{\mathcal{F}} \def\G{\mathcal{G}} $ Let $X$ be a ringed space, $U\subset X$ be an open subset and $j:U\to X$ be the inclusion. Then the functor $j^{-1}:\Mod(\O_X)\to\Mod(\O_U)$ has a left adjoint, $j_!:\Mod(\O_U)\to\Mod(\O_X)$, the extension by zero (for details, see 009Z for example).

Given a morphism of ringed spaces $f:X\to Y$, the adjunction $f^*:\Mod(\O_Y)\rightleftarrows\Mod(\O_X):f_*$ can be enhanced to a hom sheaf isomorphism $$ f_*\mathcal{H}om_{\O_X}(f^*\mathcal{F},\mathcal{G}) \cong \mathcal{H}om_{\O_Y}(\mathcal{F},f_*\mathcal{G}). $$ (See this post, for example).

I was wondering if something similar could be performed with the adjunction $j_!\dashv j^{-1}$. Maybe something like this? $$ \tag{1}\label{sheaf_adj} \Homs_{\O_X}(j_!\F,\G)\cong j_*\Homs_{\O_U}(\F,j^{-1}\G). $$

What is the heuristic process used by Hardy and Littlewood to propose the $k$-tuple conjecture?

Posted: 10 Jul 2022 03:32 AM PDT

Please give me references to books or papers. There are some questions on my mind. For example:

According to the Hardy–Littlewood conjecture, the number of twin primes less than $x$ is approximately

$$2C\,\frac{x}{(\ln x)^2},$$

where $C = 0.66016\ldots$

How does this constant arise here, and why does it have this value? Please tell me the process for obtaining this formula and constant.
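
Not the full circle-method heuristic, but the constant itself can be reproduced numerically: it is the twin prime constant $\prod_{p\ge 3}\bigl(1-\tfrac{1}{(p-1)^2}\bigr)$. A short Python sketch truncates the product and roughly compares the prediction with an actual count.

    import math
    from sympy import primerange

    C = 1.0
    for p in primerange(3, 10**6):           # truncated product over odd primes
        C *= 1 - 1 / (p - 1) ** 2
    print(C)                                 # ~ 0.6601..., matching the 0.66016... above

    # rough comparison of the actual twin prime count below x with 2*C*x/(ln x)^2
    x = 10**6
    primes = set(primerange(2, x))
    twins = sum(1 for p in primes if p + 2 in primes)
    print(twins, 2 * C * x / math.log(x) ** 2)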

Does the proportion of the volume of the hypercube to the volume of the containing hypersphere tend to 1 as dimensions grow?

Posted: 10 Jul 2022 03:52 AM PDT

I read on this website that, in the limit of large dimension, a hypercube containing a hypersphere has a hypersphere-to-hypercube volume ratio that tends to $0$. Is it also true that a hypersphere containing a hypercube has a ratio of volumes that tends to $1$? Containing means that the hypercube is inscribed inside the hypersphere, i.e. all vertices of the hypercube lie on the hypersphere.
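
It may help to compute the two volumes directly: an $n$-cube inscribed in the unit $n$-ball has side $2/\sqrt n$, and the unit $n$-ball has volume $\pi^{n/2}/\Gamma(n/2+1)$. A quick numerical sketch of the cube-to-ball ratio:

    import math

    # n-cube inscribed in the unit n-ball (vertices on the sphere): side 2/sqrt(n)
    for n in (2, 3, 5, 10, 20, 50):
        cube = (2 / math.sqrt(n)) ** n
        ball = math.pi ** (n / 2) / math.gamma(n / 2 + 1)
        print(n, cube / ball)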

If $P$ is a set of probability measures, is there a probability measure s.t. $A$ is a $\mu$-null set iff $A$ is a $\nu$-null set for all $\nu\in P$?

Posted: 10 Jul 2022 03:21 AM PDT

Let $(\Omega,\mathcal A)$ be a measurable space and $\mathcal P$ be a collection of probability measures on $(\Omega,\mathcal A)$. Can we construct a probability measure $\mu$ on $(\Omega,\mathcal A)$ such that $A\in\mathcal A$ is a $\mu$-null set if and only if $A$ is a $P$-null set for every $P\in\mathcal P$?

A question about $GL_n(\mathbb{Z})$ and $GL_n(\mathbb{F}_p)$

Posted: 10 Jul 2022 04:05 AM PDT

Let $p\ge 3$ be a prime number. $G$ is a subgroup of $GL_n(\mathbb{Z})$ and $|G|<\infty$. Let $\sigma: GL_n(\mathbb{Z})\to GL_n(\mathbb{F}_p)$ be the natural map. Prove that $\sigma|_G$ is injective.

Suppose there exist $A,B \in G$ with $A\ne B$ such that $\sigma(A)=\sigma(B)$, i.e. $A\equiv B \pmod p$. Since $|G|<\infty$, there exist $m,k\in\mathbb{Z}^+$ with $A^m=B^k=I$. From linear algebra we know $A=C\operatorname{diag}(\zeta_1,\dots,\zeta_n)C^{-1}$, where $\zeta_i^m=1$ and $C\in GL_n(\mathbb{C})$. But I don't know whether $m=k$. Taking traces and norms doesn't solve the problem, and I think $A\equiv B \pmod p$ is not easy to use.

Any ideas?

Divisors on $\mathrm{Spec}(\mathbb{Z})$ and normed-modules

Posted: 10 Jul 2022 03:23 AM PDT

Let $D$ a Weil divisor on $\mathrm{Spec} (\mathbb{Z})$, so we can write : $$ D = \sum_{p} \mathrm{ord}_p(D) [p]$$ with coefficients $\mathrm{ord}_p(D) \in \mathbb{Z}$. We define : $$ H^0(\mathrm{Spec}(\mathbb{Z}),\mathcal{O}_{\mathrm{Spec}(\mathbb{Z})}(D))= \lbrace f \in \mathbb{Q}^{*} \; : \; (f)+D \geq 0 \rbrace \cup \lbrace 0 \rbrace$$ where $(f)$ is the principal divisor associated with the rational number $f$.
$H^0(\mathrm{Spec}(\mathbb{Z}),\mathcal{O}_{\mathrm{Spec}(\mathbb{Z})}(D))$ is a free $\mathbb{Z}$-module of rank $1$.

  1. For any $n \in \mathbb{N}$, is $H^0(\mathrm{Spec}(\mathbb{Z}),\mathcal{O}_{\mathrm{Spec}(\mathbb{Z})}(nD))$ also a free module of rank $1$?

  2. Let $\|.\|$ be a norm on $H^0(\mathrm{Spec}(\mathbb{Z}),\mathcal{O}_{\mathrm{Spec}(\mathbb{Z})}(D)) \otimes_{\mathbb{Z}} \mathbb{R}$. We denote by $B(D,\|.\|)$ the set $$ B(D,\|.\|) = \lbrace f \in H^0(\mathrm{Spec}(\mathbb{Z}),\mathcal{O}_{\mathrm{Spec}(\mathbb{Z})}(D)) \otimes_{\mathbb{Z}} \mathbb{R} \; : \; \|f\| \leq 1\rbrace,$$ the unit ball for the norm. I want to compute or establish an explicit formula for $\log \mathrm{vol} (B(D,\|.\|))$.
    But I don't have any strategy; I just know that I have to put a Haar measure on $\mathbb{R}$. Can I choose the Lebesgue measure? And then, what's the formula for $\mathrm{vol} (B(D,\|.\|))$?

If you have a reference for this kind of theory (article or book), I would appreciate it.

Values $\theta$ such that $f+g$ and $g$ have the same domain, where $f(x)=\sqrt{\theta x^2-2(\theta^2-3)x-12\theta}$ and $g(x)=\ln(x^2-49)$

Posted: 10 Jul 2022 03:29 AM PDT

Given are two functions

$$\begin{align} f(x)&=\sqrt{\theta x^2-2\left(\theta^2-3\right) x-12 \theta} \\ g(x)&=\ln \left(x^2-49\right) \end{align}$$

What is the range of $\theta$ such that the functions $f+g$ and $g$ have the same domain?

I tried applying $f(7)\le 0$ and $f(-7)\le 0$, since the discriminant $D$ of the quadratic inside the root is greater than zero, but I'm missing some more relations.

What are those other relations?
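
For experimentation, here is a rough numerical scan (not a proof, and only over a bounded window of $x$) of the condition that the radicand of $f$ is nonnegative on all of $|x|>7$, which is exactly what makes $\operatorname{dom}(f+g)=\operatorname{dom}(g)$.

    import numpy as np

    # sample the domain of g, i.e. |x| > 7, on a bounded window
    xs = np.concatenate([np.linspace(-1000, -7.0001, 10001),
                         np.linspace(7.0001, 1000, 10001)])

    def radicand_ok(theta):
        q = theta * xs**2 - 2 * (theta**2 - 3) * xs - 12 * theta
        return np.all(q >= 0)

    thetas = np.linspace(-6, 6, 1201)
    good = [t for t in thetas if radicand_ok(t)]
    print(min(good), max(good))   # approximate endpoints of the admissible range of theta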

Is there a function $f:[0,\infty) \to \{-1, 1\}$ such that $\int_0^\infty{f(x)\,dx}$ is well defined?

Posted: 10 Jul 2022 04:00 AM PDT

I suspect it is impossible to find such an $f$, but I can see one way it might be possible to as well.

If it is possible to construct two sets $A$ and $B$ which partition the non-negative reals such that $A \cap [0,a)$ and $B \cap [0,a)$ both have measure $a/2$ for any positive real $a$, then take $f^{-1}(1) = A$, $f^{-1}(-1) = B$. We would have $\int_0^a{f(x)\,dx}=0$ for all $a$ and $\lim_{a \to \infty}\int_0^a{f(x)\,dx}=0$.

Solving Inequality with positive numbers $a,b,c$.

Posted: 10 Jul 2022 03:50 AM PDT

$a,b,c \in \mathbb{R}$, $0\leq a,b,c \leq 1$. Find the maximum of $$\frac{a+b+c}{3}+\sqrt{a(1-a)+b(1-b)+c(1-c)}$$

My answer: let $t=\dfrac{a+b+c}{3}$. Then the expression becomes $$t+\sqrt{3t-(a^2+b^2+c^2)}\leq t+\sqrt{3t-3t^2}\quad(\because \text{Cauchy–Schwarz}).$$ Then let $f(t)=t+\sqrt{3t-3t^2}$ and differentiate to find the extrema.

The answer comes out to $a=b=c=\dfrac{3}{4}$, and the maximum is $\dfrac{3}{2}$.

Is there a better way of solving this?
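
A quick numerical cross-check of that value (random sampling, not a proof):

    import math, random

    def F(a, b, c):
        return (a + b + c) / 3 + math.sqrt(a*(1-a) + b*(1-b) + c*(1-c))

    best = max(F(random.random(), random.random(), random.random())
               for _ in range(200000))
    print(best, F(0.75, 0.75, 0.75))   # the sampled maximum stays below 3/2 = F(3/4, 3/4, 3/4)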

Reference request: Slope Stability, Bridgeland Stability

Posted: 10 Jul 2022 04:01 AM PDT

Just wanted to ask for references about these topics:

  • Smooth projective curves
  • Coherent & Quasicoherent sheaves and their connection to bundles
  • Degree of a line bundle on a curve
  • Derived category of coherent sheaves
  • Harder–Narasimhan stability (filtration)
  • Bounded t-structures
  • Bridgeland stability

I have a background in homological algebra, sheaf theory & cohomology, derived categories and basic algebraic geometry (very little about schemes).

I've studied from Gelfand & Manin's book and Kashiwara & Schapira's book/notes on sheaves. Right now I'm trying to get through Ravi Vakil's FOAG notes and Hartshorne's book, but they're very dense, even though I've already worked out some of the exercises about sheaves.

I wonder if there is some reference that covers these topics in a reasonable way. The aim is to have a good understanding of what Bridgeland stability is, and why we need to work in the derived category of coherent sheaves.

Homology group calculation of Klein Bottle is not a free product

Posted: 10 Jul 2022 03:48 AM PDT

I'm attempting to calculate the homology groups of the Klein bottle. I used the fundamental polygon below, from which $\partial_2(u)=-a-b+c$, $\partial_2(L)=-a+b-c$, and $\partial_1(a)=\partial_1(b)=\partial_1(c)=0$. Calculating $H_2$ is easier, but when calculating $H_1$, I find $Z_1=\langle a,b,c\rangle,B_1=\langle -a-b+c,-a+b-c\rangle$, so \begin{align} H_1&\cong\langle a,b,c\ |\ -a-b+c,-a+b-c\rangle\\ &\cong\langle a,b,c\ |\ 2a,-a+b-c\rangle\\ &\cong\langle a,b\ |\ 2a\rangle\\ &\cong\mathbb{Z}*(\mathbb{Z}/2\mathbb{Z}) \end{align}

But the homology group is $\mathbb{Z}\times(\mathbb{Z}/2\mathbb{Z})$ according to a bunch of sources, so what did I do wrong?

Klein Bottle fundamental polygon
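
Independently of where the presentation manipulation goes astray, the answer can be cross-checked from the boundary matrix via a Smith normal form computation (a sympy sketch, not the hand calculation itself):

    from sympy import Matrix, ZZ
    from sympy.matrices.normalforms import smith_normal_form

    # columns are the 2-cells u, L written in the 1-cell basis (a, b, c)
    d2 = Matrix([[-1, -1],
                 [-1,  1],
                 [ 1, -1]])
    print(smith_normal_form(d2, domain=ZZ))
    # diagonal entries 1 and 2, so H_1 = Z^(3-2) (+) Z/2Z = Z x Z/2Z,
    # a direct sum of abelian groups, matching the cited sources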

Explicit solution to Poisson equation on torus

Posted: 10 Jul 2022 04:04 AM PDT

I am studying the Vlasov-Poisson system on a torus in three dimensions, $\mathbb{T}^3$. I consider a constant mass density $\rho = C$. What is a solution to the Poisson equation $\Delta U = 4\pi\rho $ for this mass density? Unfortunately, I am not used to working with a torus.

Is a solution just \begin{align} U = \frac{2\pi}{3}C \,x^2 \end{align} or is this only the case for $\mathbb{R}^3$? Please let me know if something about my question is unclear. Thank you!
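
One practical way to experiment is to solve the equation spectrally. Below is a Python/FFT sketch on $[0,2\pi)^3$; note the $k=0$ Fourier mode: a periodic solution exists only when the right-hand side has zero mean (and is then determined up to a constant), which is why a genuinely constant nonzero $\rho$ is problematic and is usually handled by subtracting the mean density.

    import numpy as np

    n = 32
    x = np.linspace(0, 2*np.pi, n, endpoint=False)
    X, Y, Z = np.meshgrid(x, x, x, indexing="ij")
    rho = np.cos(X) * np.cos(Y)                     # example density with zero mean

    k = np.fft.fftfreq(n, d=2*np.pi/n) * 2*np.pi    # integer wavenumbers
    KX, KY, KZ = np.meshgrid(k, k, k, indexing="ij")
    K2 = KX**2 + KY**2 + KZ**2

    rhs_hat = np.fft.fftn(4*np.pi*rho)
    U_hat = np.zeros_like(rhs_hat)
    nz = K2 > 0
    U_hat[nz] = -rhs_hat[nz] / K2[nz]               # U fixed up to the constant (k = 0) mode
    U = np.real(np.fft.ifftn(U_hat))

    # check: the (spectral) Laplacian of U reproduces 4*pi*rho
    lap = np.real(np.fft.ifftn(-K2 * np.fft.fftn(U)))
    print(np.max(np.abs(lap - 4*np.pi*rho)))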

How does changing definitions of lower sum and upper sum affect Darboux integration?

Posted: 10 Jul 2022 03:42 AM PDT

Let $P=\{x_i\}_{i=0}^{n}$ be a partition of $[a,b]$ and $f$ is bounded on $[a,b]$. The usual definition of a lower and upper sum is $L(P,f) = \sum m_i \cdot (x_i - x_{i-1})$ and $U(P,f)= \sum M_i \cdot (x_i - x_{i-1})$ where $m_i = \inf\limits_{[x_{i-1}, x_{i}]}f $ and $M_i = \sup\limits_{[x_{i-1}, x_{i}]}f $ but what happens if I define lower and upper sums by excluding the endpoints like this:

$L'(P,f) = \sum m'_i \cdot (x_i - x_{i-1})$ and $U'(P,f)= \sum M'_i \cdot (x_i - x_{i-1})$ where $m'_i = \inf\limits_{(x_{i-1}, x_{i})}f $ and $M'_i = \sup\limits_{(x_{i-1}, x_{i})}f $

And rest all the definitions for lower and upper integrals etc. remain the same.

How would one go about showing that the two approaches would be equivalent?


A rough sketch in my mind would be since a bounded function $f$ is Darboux integrable (DI) on $[a,b]$ $\iff$ it is DI on each $[x_{i-1},x_i]$ $\iff$ it is DI on each $(x_{i-1},x_i)$ (because endpoints don't affect integrability) $\iff$ it's DI on $[a,b]\setminus \{x_i\}_{i=0}^n$ and hence also on $[a,b]$ (because discontinuity on finite points don't affect integrability). Is this good?

Is it always possible to find a joint distribution $p(x_1,x_2,x_3,x_4)$ consistent with these local conditional distributions?

Posted: 10 Jul 2022 03:39 AM PDT

I am currently studying Bayesian Reasoning and Machine Learning by David Barber, specifically exercise 4.1 in chapter 4 (p. 79). The exercise is the following:

Exercise 4.1

  1. Consider the pairwise Markov network, $$p(x) = \phi(x_1,x_2)\phi(x_2,x_3)\phi(x_3,x_4)\phi(x_4,x_1)$$ Express in terms of $\phi$ the following: $$p(x_1|x_2,x_4), p(x_2|x_1,x_3),p(x_3|x_2,x_4),p(x_4|x_1,x_3)$$
  2. For a set of local distributions defined as $$p(x_1|x_2,x_4), p(x_2|x_1,x_3),p(x_3|x_2,x_4),p(x_4|x_1,x_3)$$ is it always possible to find a joint distribution $p(x_1,x_2,x_3,x_4)$ consistent with these local conditional distributions?

Now, I've done the whole first part. I have a solution manual and checked my derivations are correct. Here's the first expression:

$$p(x_1|x_2,x_4) = \frac{\sum_3\phi(x_1,x_2)\phi(x_2,x_3)\phi(x_3,x_4)\phi(x_4,x_1)}{\sum_{1,3}\phi(x_1,x_2)\phi(x_2,x_3)\phi(x_3,x_4)\phi(x_4,x_1)} \\ = \frac{\phi(x_1,x_2)\phi(x_4,x_1)\sum_3\phi(x_2,x_3)\phi(x_3,x_4)}{\sum_1\phi(x_1,x_2)\phi(x_4,x_1)\sum_3\phi(x_2,x_3)\phi(x_3,x_4)}\\ = \frac{\phi(x_1,x_2)\phi(x_4,x_1)}{\sum_1\phi(x_1,x_2)\phi(x_4,x_1)}$$

What I don't understand is the manual's answer to the second question, in particular how their equation is correct. I also don't understand the question itself.

Here's their answer:

It is not always possible to do this. One way to see this is to consider for example $$p_1(x_1|x_2,x_4)\sum_{x_1,x_3}p(x_1,x_2,x_3,x_4) = p(x_1,x_2,x_3,x_4)$$

What does $p_1$ mean?

How is this correct? Since as I've shown before

$$p(x_1|x_2,x_4) = \frac{\sum_3\phi(x_1,x_2)\phi(x_2,x_3)\phi(x_3,x_4)\phi(x_4,x_1)}{\sum_{1,3}\phi(x_1,x_2)\phi(x_2,x_3)\phi(x_3,x_4)\phi(x_4,x_1)} \\ \Leftrightarrow \\ p(x_1|x_2,x_4) = \frac{\sum_3 p(x_1,x_2,x_3,x_4)}{\sum_{1,3} p(x_1,x_2,x_3,x_4)} \\ \Leftrightarrow \\ p(x_1|x_2,x_4) \sum_{1,3} p(x_1,x_2,x_3,x_4)= \sum_3 p(x_1,x_2,x_3,x_4)$$

So how is that correct? Also, what does it mean for a distribution to be consistent exactly?
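
As a side check of the part-1 formula (it does not settle the part-2 consistency question), one can verify numerically on binary variables that the conditional computed from the joint matches $\phi(x_1,x_2)\phi(x_4,x_1)/\sum_{x_1}\phi(x_1,x_2)\phi(x_4,x_1)$. A Python sketch with random potentials:

    import itertools
    import numpy as np

    rng = np.random.default_rng(0)
    phi12, phi23, phi34, phi41 = (rng.random((2, 2)) + 0.1 for _ in range(4))

    # joint p(x1, x2, x3, x4) by brute force
    joint = np.zeros((2, 2, 2, 2))
    for x1, x2, x3, x4 in itertools.product(range(2), repeat=4):
        joint[x1, x2, x3, x4] = phi12[x1, x2] * phi23[x2, x3] * phi34[x3, x4] * phi41[x4, x1]
    joint /= joint.sum()

    # conditional from the joint vs. the closed form derived in part 1
    for x2, x4 in itertools.product(range(2), repeat=2):
        from_joint = joint[:, x2, :, x4].sum(axis=1) / joint[:, x2, :, x4].sum()
        closed = phi12[:, x2] * phi41[x4, :]
        closed /= closed.sum()
        print(np.allclose(from_joint, closed))      # True for every (x2, x4)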

Finding the center and axes of an ellipse in 3D

Posted: 10 Jul 2022 04:05 AM PDT

I am studying ellipses and need to prove that the intersection of a plane and an elliptical cylinder is still an ellipse in 3D. My approach is to find its center and axes, and I have been struggling.

I have a parametric equation, $\left(x,\, y,\, z\right) = \left(\frac{1}{4}-\frac{3}{2}\cos t-\frac{1}{4}\sin t,\, 2\cos t,\, \sin t \right)$. This is the intersection of the plane $4x+3y+z=1$ and the elliptical cylinder $y^2+4z^2=4$.

How do I find its center and two axes? Equivalently, how can I express it in the vector form:

$$\mathbf x (t)=\mathbf c+(\cos t)\mathbf u+(\sin t)\mathbf v$$

where $\mathbf u$ and $\mathbf v$ are orthogonal vectors from the center $\mathbf c$ whose norms represent the lengths of the axes and whose directions represent the directions of the axes.
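
A numerical way to obtain that vector form (a sketch; the same idea works symbolically): read off $\mathbf c$, $\mathbf A$, $\mathbf B$ from the given parametrisation $\mathbf x(t)=\mathbf c+(\cos t)\mathbf A+(\sin t)\mathbf B$, where $\mathbf A$, $\mathbf B$ need not be orthogonal, then an SVD of the $3\times 2$ matrix $[\mathbf A\ \mathbf B]$ absorbs the phase and yields orthogonal $\mathbf u,\mathbf v$ whose lengths are the semi-axes.

    import numpy as np

    c = np.array([0.25, 0.0, 0.0])
    A = np.array([-1.5, 2.0, 0.0])    # coefficient vector of cos t
    B = np.array([-0.25, 0.0, 1.0])   # coefficient vector of sin t

    M = np.column_stack([A, B])       # x(t) = c + M @ (cos t, sin t)
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    u = s[0] * U[:, 0]                # orthogonal semi-axis vectors;
    v = s[1] * U[:, 1]                # the singular values are the semi-axis lengths
    print("center:", c)
    print("u =", u, "v =", v, "lengths:", s, "u.v =", np.dot(u, v))

    # check: points of the original curve satisfy the ellipse equation in the (u, v) frame
    t = np.linspace(0, 2 * np.pi, 9)
    p = c[:, None] + M @ np.vstack([np.cos(t), np.sin(t)])
    X = U[:, 0] @ (p - c[:, None])
    Y = U[:, 1] @ (p - c[:, None])
    print(np.allclose((X / s[0]) ** 2 + (Y / s[1]) ** 2, 1.0))   # True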

$\left\lfloor\frac{8n+13}{25}\right\rfloor-\left\lfloor\frac{n-12-\left\lfloor\frac{n-17}{25}\right\rfloor}{3}\right\rfloor$ is independent of $n$

Posted: 10 Jul 2022 04:03 AM PDT

If $n$ is a positive integer, prove that $$\left\lfloor\frac{8n+13}{25}\right\rfloor-\left\lfloor\frac{n-12-\left\lfloor\frac{n-17}{25}\right\rfloor}{3}\right\rfloor$$ is independent of $n$.

Taking $$f(n)=\left\lfloor\frac{8n+13}{25}\right\rfloor-\left\lfloor\frac{n-12-\left\lfloor\frac{n-17}{25}\right\rfloor}{3}\right\rfloor$$ I could prove that $$f(n+25)=f(n).$$ But I have no idea how to show $$f(n+k)=f(n)\quad\forall k\in \{1,2,\dots, 24\}.$$ Of course, we can check this finite number of cases by brute force. But is there a more elegant method to handle this?
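
For what it's worth, the brute-force route is only a few lines in Python (floor division // agrees with $\lfloor\cdot\rfloor$ for negative arguments as well):

    def f(n):
        return (8*n + 13) // 25 - (n - 12 - (n - 17) // 25) // 3

    print({f(n) for n in range(1, 10001)})   # should print a single value if the claim holds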

How to show a polynomial $f(x,y)$ over an alg closed field cannot have a zero at exactly one point?

Posted: 10 Jul 2022 03:33 AM PDT

Let $k$ be an algebraically closed field. Let $f \in k[x,y]$ be a given polynomial.

Could anyone explain how to show that if $f(x,y) \neq0$ for all $(x,y) \in (k\times k)\setminus\{(0,0)\}$, then $f$ is constant?

Understanding index notation in a matrix differential equation

Posted: 10 Jul 2022 03:20 AM PDT

Suppose we have an equation $\dot x=A(t)x$, and further suppose $A(t)=\sum_{l=0}^{\infty} A_{l}\mathbf{1}_{[T_l,T_{l+1})}$, where $T_l$, $l=0,1,2,\dots$, is a sequence of time instants and the $A_l$ are constant matrices. The fundamental matrix is then given by $$\mathcal{M}(t,t_0)=e^{(t-T_{l(t)})A_{l(t)}}\,\mathcal{M}(T_{l(t)},t_0),$$ where the index $l(t)$ is determined so that $t\in [T_{l(t)},T_{l(t)+1})$.

I am having terrible trouble understanding the index notation $l(t)$. Could anyone explain why it is used and what its function is here? It would also be great to learn how the fundamental matrix expression takes the form above. Thank you for your help. Ref: Page 5, here
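
A small numerical illustration may help (assuming my reading that $A(t)$ is piecewise constant): on each interval $[T_l,T_{l+1})$ the propagator is a constant-matrix exponential, so $\mathcal M(t,T_0)$ is a chain of such exponentials, and $l(t)$ simply names the interval containing $t$. A Python sketch with made-up $T_l$ and $A_l$:

    import numpy as np
    from scipy.linalg import expm

    T = [0.0, 1.0, 2.5, 4.0]                      # switching instants T_0 < T_1 < ...
    A = [np.array([[0, 1], [-1, 0]]),             # A_0 on [T_0, T_1)
         np.array([[0, 1], [-2, -0.5]]),          # A_1 on [T_1, T_2)
         np.array([[0, 1], [-4, 0]])]             # A_2 on [T_2, T_3)

    def l_of_t(t):
        """index of the interval containing t, i.e. t in [T_l, T_{l+1})"""
        return max(i for i in range(len(T) - 1) if T[i] <= t)

    def M(t):
        """fundamental matrix M(t, T_0) for the piecewise-constant A(t) above"""
        l = l_of_t(t)
        out = expm((t - T[l]) * A[l])             # partial step inside the current interval
        for i in range(l - 1, -1, -1):            # full steps over the earlier intervals
            out = out @ expm((T[i + 1] - T[i]) * A[i])
        return out

    print(M(3.2))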

Imaginary Numbers in the Ring of $p$-adic Integers?

Posted: 10 Jul 2022 04:01 AM PDT

I am currently reading the book Ultrametric Calculus: An Introduction to $p$-adic Analysis by Schikhof, and I came across this problem:

"Prove that $x^2+1=0$ has no solutions in $\mathbb{Z}_3$ but has two solutions in $\mathbb{Z}_5$."

Here, $\mathbb{Z}_p$ denotes the ring of $p$-adic integers. The issue I am having is that, up to this point, the book has not given any theorems on irreducibility over $\mathbb{Z}_p$ or any helpful theorems for finding solutions of polynomials $f(x)\in\mathbb{Z}_p[x]$, so I am assuming that there must be some way to do this solely from the construction of $\mathbb{Z}_p$. I am unsure how to do this and could use some help, as I have completely run out of ideas. Any help is appreciated.
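
Working directly with the construction (an element of $\mathbb{Z}_p$ is a compatible sequence of residues mod $p^k$), one can at least see what the answer should look like by checking $x^2+1\equiv 0$ modulo $3^k$ and $5^k$. A small Python sketch:

    def roots_mod(pk):
        return [x for x in range(pk) if (x * x + 1) % pk == 0]

    for p in (3, 5):
        for k in range(1, 6):
            print(p, k, roots_mod(p ** k))
    # mod 3^k there are no roots already at k = 1; mod 5^k two roots persist at
    # every level (2, 3 mod 5; 7, 18 mod 25; ...), which is what lifts to the
    # two solutions in Z_5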

$x(x-1)^2+\alpha$ bifurcation diagram [closed]

Posted: 10 Jul 2022 03:54 AM PDT

I have to find all the equilibrium points of $$\dot{x}=x(x-1)^2+\alpha$$ and sketch the corresponding bifurcation diagram, but I don't see how to start, since the roots of this polynomial don't have 'nice' expressions. Do you know how to deal with this kind of system?
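
Even without nice closed-form roots, the diagram itself can be drawn numerically; a matplotlib sketch (the fold locations can then be read off, or obtained from the local extrema of $x(x-1)^2$):

    import numpy as np
    import matplotlib.pyplot as plt

    alphas = np.linspace(-1.0, 1.0, 2001)
    pts_a, pts_x = [], []
    for a in alphas:
        for r in np.roots([1, -2, 1, a]):        # equilibria solve x^3 - 2x^2 + x + alpha = 0
            if abs(r.imag) < 1e-9:
                pts_a.append(a)
                pts_x.append(r.real)

    plt.plot(pts_a, pts_x, ".", ms=1)
    plt.xlabel(r"$\alpha$"); plt.ylabel("equilibria $x^*$")
    plt.show()
    # folds occur where x*(x-1)^2 has a local extremum (x = 1/3 and x = 1),
    # i.e. at alpha = -4/27 and alpha = 0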

Prove that $\sum_{1\le i<j\le n}\frac{x_ix_j}{(1-x_i)(1-x_j)} \le \frac{n(n-1)}{2(2n-1)^2}$

Posted: 10 Jul 2022 04:04 AM PDT

Let $n \ge 2$ be an integer and let $x_1,\dots,x_n$ be positive reals such that $$\sum_{i=1}^nx_i=\frac{1}{2}.$$ Prove that $$\sum_{1\le i<j\le n}\frac{x_ix_j}{(1-x_i)(1-x_j)}\le \frac{n(n-1)}{2(2n-1)^2}$$

Here is the source of the problem (in French): here

Edit:

I'll present my best bound yet on $$\sum_{1\le i<j\le n}\frac{x_ix_j}{(1-x_i)(1-x_j)}=\frac{1}{2}\left(\sum_{k=1}^n\frac{x_k}{1-x_k}\right)^2-\frac{1}{2}\sum_{k=1}^n\frac{x_k^2}{(1-x_k)^2}$$ This formula was derived in @GCab's answer.

First let $a_k=x_k/(1-x_k)$ so we want to prove $$\left(\sum_{k=1}^na_k\right)^2-\sum_{k=1}^na_k^2\le \frac{n(n-1)}{(2n-1)^2}$$ But since $$\frac{x_k}{1-x_k}<2x_k\implies \sum_{k=1}^na_k<1 \quad (1)$$ Hence $$\left(\sum_{k=1}^na_k\right)^2\le \sum_{k=1}^na_k$$ Meaning $$\left(\sum_{k=1}^na_k\right)^2-\sum_{k=1}^na_k^2\le\sum_{k=1}^na_k(1-a_k)$$

Now consider the following function $$f(x)=\frac{x}{1-x}\left(1-\frac{x}{1-x}\right)$$

$f$ is concave on $(0,1)$ and by the tangent line trick we have $$f(x)\le f'(a)(x-a)+f(a)$$

set $a=1/2n$ to get $$a_k(1-a_k)\le\frac{4n^2\left(2n-3\right)}{\left(2n-1\right)^3}\left(x_k-\frac{1}{2n}\right)+ \frac{2(n-1)}{(2n-1)^2}$$ Now we sum to finish $$\sum_{k=1}^na_k(1-a_k)\le \frac{2n(n-1)}{(2n-1)^2}$$

Maybe by tweaking $(1)$ a little bit we can get rid of this factor of $2$
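
For whatever reassurance it gives, the inequality survives a quick randomized test, with near-equality around $x_i=\tfrac{1}{2n}$ (a Python sketch, not a proof):

    import random

    def lhs(x):
        return sum(x[i]*x[j] / ((1-x[i])*(1-x[j]))
                   for i in range(len(x)) for j in range(i+1, len(x)))

    for n in (2, 3, 5, 10):
        worst = 0.0
        for _ in range(20000):
            w = [random.random() for _ in range(n)]
            s = sum(w)
            x = [wi / (2*s) for wi in w]          # rescale so that sum(x) = 1/2
            worst = max(worst, lhs(x))
        print(n, worst, n*(n-1) / (2*(2*n-1)**2))  # the sampled maximum stays below the bound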

Intuition of a Function as an Infinite Dimensional vector

Posted: 10 Jul 2022 03:24 AM PDT

Functions as infinite dimensional vectors make sense intuitively after studying much linear algebra, but I only recently realized I've been taught two ways of thinking about them.

For a one-dimensional function $f(x)$ you can expand in the basis of polynomials (a Taylor series) and get an infinite-dimensional vector whose $i$th component is attached to the basis element $x^{i}$; or you can think of the function as the list of all its values over its domain, e.g. $ f = \dots, f(-0.01), f(0), f(0.01), \dots $ where the spacing $0.01$ goes to $0$.

In the former case, differentiation and integration become matrix multiplication, which is a nice property, and the latter case makes things like the inner product of functions feel a lot more natural.

But in the former case the length of the vector is the cardinality of the natural numbers, as opposed to the latter where it has the size of the reals, so these clearly aren't equivalent representations.

Is there any relationship between these two ways of thinking of a function as an infinite dimensional vector?

Correlation between sleep hours and brain activity

Posted: 10 Jul 2022 04:07 AM PDT

Say I am tracking my sleep for 1 week and these are the number of hours I sleep each night:

(5, 6, 9, 4, 8, 9, 6)

Every day that I track my sleep I also take a test that measures how well my brain is working. These were my scores:

(45, 23, 33, 48, 68, 19, 26)

The higher the score the better.

Given these numbers, I would like to find the optimal number of hours I would have to sleep to score the highest on this test. For example, the answer could be that X hours of sleep would give me the best chance of scoring highest on this test. I would like to know how to find X.

Any help that points me in the right direction would be incredibly appreciated!
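
One simple starting point (heavily caveated: seven points is far too little data for real conclusions, and this assumes a single-peaked relationship between hours and score) is to fit a quadratic score-vs-hours curve and take its vertex. A Python sketch:

    import numpy as np

    hours = np.array([5, 6, 9, 4, 8, 9, 6])
    score = np.array([45, 23, 33, 48, 68, 19, 26])

    c2, c1, c0 = np.polyfit(hours, score, 2)      # score ~ c2*h^2 + c1*h + c0
    if c2 < 0:                                    # concave fit: interior maximum at the vertex
        print("estimated optimum:", -c1 / (2 * c2), "hours")
    else:                                         # convex fit: no interior maximum
        print("quadratic fit is convex; the 'optimum' sits at an endpoint of the data")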

Finding integrating factor for inexact differential equation?

Posted: 10 Jul 2022 03:40 AM PDT

Suppose we have the following differential equation that is NOT exact, i.e. $M_y \ne N_x$: $2xy^3+y^4+(xy^3-2y)y'=0$

How would I find an integrating factor $\mu(x,y)$ so that when I multiply the differential equation by it, the equation becomes exact?

Update: here's what I got: [image]
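
For comparison, here is the standard search for a one-variable integrating factor carried out with sympy (a sketch, not necessarily the intended method): since $(N_x-M_y)/M$ turns out to depend on $y$ alone, a factor $\mu(y)$ exists.

    import sympy as sp

    x, y = sp.symbols("x y")
    M = 2*x*y**3 + y**4
    N = x*y**3 - 2*y

    ratio = sp.simplify((sp.diff(N, x) - sp.diff(M, y)) / M)
    print(ratio)                                   # -3/y, a function of y alone
    mu = sp.exp(sp.integrate(ratio, y))            # mu(y) = exp(integral of -3/y)
    print(sp.simplify(mu))                         # y**(-3)

    # check exactness after multiplying through by mu
    print(sp.simplify(sp.diff(mu*M, y) - sp.diff(mu*N, x)))   # 0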
