Saturday, July 31, 2021

Recent Questions - Mathematics Stack Exchange



Understanding the proof that $A_1+A_2\mathop{\longrightarrow}\limits^{\begin{pmatrix}1&0\\0&1\end{pmatrix}}A_1\times A_2$ is an isomorphism

Posted: 31 Jul 2021 07:44 PM PDT

Fix an abelian category. Let $A_1$ and $A_2$ be objects and let $A_1\times A_2$ be their product and $A_1+A_2$ their coproduct (i.e., sum). I am trying to understand the proof of the following theorem.

Theorem 2.35 for abelian categories $A_1+A_2\mathop{\longrightarrow}\limits^{\begin{pmatrix}1&0\\0&1\end{pmatrix}}A_1\times A_2$ is an isomorphism.

Proof: Let $K\to A_1+A_2$ be the kernel of $\begin{pmatrix}1&0\\0&1\end{pmatrix}$. Then $K\longrightarrow A_1+A_2\mathop{\longrightarrow}\limits^{\begin{pmatrix}1&0\\0&1\end{pmatrix}}A_1\times A_2\mathop{\longrightarrow}\limits^{p_2}A_2\ =\ K\longrightarrow A_1+A_2\mathop{\longrightarrow}\limits^{\begin{pmatrix}0\\1\end{pmatrix}} A_2$ and $K\longrightarrow A_1+A_2$ is contained in $A_1\mathop{\longrightarrow}\limits^{u_1}A_1+A_2$. Similarly it is contained in $A_2\mathop{\longrightarrow}\limits^{u_2}A_1+A_2$, and hence it is contained in their intersection, which is zero. Thus $K=O$ and $\begin{pmatrix}1&0\\0&1\end{pmatrix}$ is monomorphic. Dually it is epimorphic and hence an isomorphism.

I don't understand the second sentence of the proof. Why are $K\longrightarrow A_1+A_2\longrightarrow A_1\times A_2\longrightarrow A_2$ and $K\longrightarrow A_1+A_2 \longrightarrow A_2$ the same, and why is $K\longrightarrow A_1+A_2$ contained in $A_1\longrightarrow A_1+A_2$?

Find a simpler description for each of the following rings.

Posted: 31 Jul 2021 07:44 PM PDT

Find a simpler description for each of the following rings:

a) $\mathbb{Q}[x]/(x^5+x^3)$

b) $\mathbb{Z}[x]/(x-2,x^2+3x)$

Solution of (a):
$$ \dfrac{\mathbb{Q}[x]}{(x^5+x^3)} \cong \dfrac{\mathbb{Q}[x]}{(x^3(x^2+1))} \cong \dfrac{\mathbb{Q}[x]}{(x^3,x^2+1)} \cong \dfrac{\mathbb{Q}[x]/(x^2+1)}{(x^3,x^2+1)/(x^2+1)} \cong \dfrac{\mathbb{Q}[i]}{(x^3)} $$

  • Is this correct?

Solution of (b): $$ \dfrac{\mathbb{Z}[x]}{(x-2,x^2+3x)} \cong \dfrac{\mathbb{Z}[x]}{(x-2, x, x+3)} \cong \dfrac{\mathbb{Z}[x]/(x)}{(x-2, x, x+3)/(x)} \cong \dfrac{\mathbb{Z}}{(x-2, x+3)} $$

Note that $x-2, x+3$ are pairwise relatively prime in $\mathbb{Z}[x]$. By Chinese Remainder Theorem, $$ \dfrac{\mathbb{Z}}{(x-2, x+3)} \cong \dfrac{\mathbb{Z}}{(x-2)} \times \dfrac{\mathbb{Z}}{(x+3)} $$

  • Can I say the last line $\cong \mathbb{Z}_2 \times \mathbb{Z}_3$?

Connections of estimator and least-square fit?

Posted: 31 Jul 2021 07:42 PM PDT

I don't have much background in statistics, but I wonder if there's any connection between an estimator and a least-squares fit. Can I understand the least-squares fit as one kind of estimator that returns a prediction of unknown results for us? Suppose I want to give a Gaussian fit to my data $\{(x_i,y_i)\}$, and there are four parameters: $$ G(x) = B+A\exp\left[-\left(\frac{x-\mu}{\sigma}\right)^2\right] $$ Is this an estimator of the trend/distribution of my data set? If I'm particularly interested in determining the Gaussian center $\mu$ of my data, is there a way I can find an estimator of that?
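In standard terminology a least-squares fit does define an estimator: plugging the data into the minimizer of the squared residuals gives a point estimate $\hat\mu$ of the center. A sketch with SciPy's `curve_fit` on synthetic data (all numbers below are illustrative assumptions, not from the question):

```python
import numpy as np
from scipy.optimize import curve_fit

def gauss(x, B, A, mu, sigma):
    # the model G(x) = B + A * exp(-((x - mu) / sigma)^2) from the question
    return B + A * np.exp(-((x - mu) / sigma) ** 2)

# synthetic data: "true" parameters (assumed for illustration) plus noise
rng = np.random.default_rng(0)
x = np.linspace(-5, 5, 200)
y = gauss(x, 1.0, 2.0, 0.5, 1.2) + 0.05 * rng.standard_normal(x.size)

# least squares returns point estimates of all four parameters at once;
# the mu entry is precisely a point estimator of the Gaussian center
popt, pcov = curve_fit(gauss, x, y, p0=[0.0, 1.0, 0.0, 1.0])
B_hat, A_hat, mu_hat, sigma_hat = popt
print(mu_hat)
```

The diagonal of `pcov` additionally gives an (asymptotic) variance for each estimate, so one can also attach an uncertainty to $\hat\mu$.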

For which primes $p$ is the group of units $R^*$ cyclic?

Posted: 31 Jul 2021 07:40 PM PDT

I'm studying for my qualifying exam and found the following question in the question bank of previous years' papers:

Let $p$ be a prime and $\mathbb{F}_p$ the field of $p$ elements. Set $R=\mathbb{F}_p[x]/(x^3)$. For which primes $p$ is the group of units $R^*$ cyclic?

What I've got so far: $R$ is a local ring, and any element of $R$ with nonzero constant term is a unit. I have proven it for $p=2$, but I don't know if it will be true for any other primes. Also, I have proven that if $r=a\overline{x}^2+b\overline{x}+c$ is a generator of the cyclic group $R^*$ then $b\not=0$. I tried to look at the powers of $r$, hoping I might find some element of $R^*$ which can't be expressed as $r^n$ for any $n$, but it got complicated and now I'm completely lost.
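For small primes one can simply brute-force the group $R^*$ and test for a generator. A sketch (exhaustive search of orders, so only feasible for small $p$; $|R^*| = p^2(p-1)$):

```python
def unit_group_is_cyclic(p):
    """Brute-force: is (F_p[x]/(x^3))^* cyclic?  Elements are coefficient
    triples (c, b, a) for c + b*x + a*x^2 with c != 0."""
    def mul(u, v):
        # multiply truncated polynomials mod x^3, coefficients mod p
        w = [0, 0, 0]
        for i in range(3):
            for j in range(3 - i):
                w[i + j] = (w[i + j] + u[i] * v[j]) % p
        return tuple(w)

    units = [(c, b, a) for c in range(1, p) for b in range(p) for a in range(p)]
    n = len(units)  # = p^2 * (p - 1)
    one = (1, 0, 0)
    for g in units:
        # compute the multiplicative order of g
        h, order = g, 1
        while h != one:
            h = mul(h, g)
            order += 1
        if order == n:  # found a generator
            return True
    return False

results = {p: unit_group_is_cyclic(p) for p in (2, 3, 5)}
print(results)
```

Such an experiment doesn't prove anything for general $p$, but it shows quickly which small primes are worth trying to handle by hand.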

Norm of linear map $f:R^n \rightarrow R^m$ from its matrix

Posted: 31 Jul 2021 07:47 PM PDT

Assume $f:R^n \rightarrow R^m$ is a linear map. Therefore, $f$ can be represented as a matrix. Namely, for any $x=(x_1,...,x_n)\in R^n$ \begin{align} f(x)= (x_1,...,x_n) \begin{pmatrix} a_{11} & ... & a_{1m} \\ ... & ... & ... \\ a_{n1} & ... & a_{nm} \end{pmatrix} =xA \end{align} I think the matrix $A$ contains all the information about $f$. Define the norm of $f$ as $$ \|f\| = \sup_{x\in R^n,\,x\neq 0} \frac{|f(x)|}{|x|} $$ where $|x|$ is the Euclidean norm. Therefore, there should be a way to calculate $\|f\|$ from $A$. But I can't find which quantity of $A$ equals $\|f\|$. For example, the eigenvalues of $A$ do not give $\|f\|$ (in fact, $A$ may not have eigenvalues at all, since possibly $n\not = m$).

PS: I have only a little knowledge of linear algebra.
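In fact $\|f\|$ is the largest singular value of $A$, equivalently $\sqrt{\lambda_{\max}(AA^{\mathsf T})}$, which exists even when $n\neq m$. A NumPy check of this against the sup definition, with an illustrative random $A$:

```python
import numpy as np

rng = np.random.default_rng(1)
n, m = 4, 3
A = rng.standard_normal((n, m))

# spectral norm: largest singular value of A, which is also the square
# root of the largest eigenvalue of the symmetric matrix A A^T
sigma_max = np.linalg.svd(A, compute_uv=False)[0]
eig_max = np.linalg.eigvalsh(A @ A.T).max()

# empirical check: the ratio |xA| / |x| over many random directions x
# never exceeds sigma_max, and gets close to it for some x
xs = rng.standard_normal((100000, n))
ratios = np.linalg.norm(xs @ A, axis=1) / np.linalg.norm(xs, axis=1)
print(sigma_max, np.sqrt(eig_max), ratios.max())
```

The supremum is attained at the top right-singular direction of $A^{\mathsf T}$, which is why random sampling only approaches it from below.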

Determine the order of operator norm of Gaussian random matrix raised to the 4th power

Posted: 31 Jul 2021 07:29 PM PDT

Let $A\in \mathbb{R}^{n\times n}$ be a random matrix whose entries are i.i.d. $\mathcal{N}(0,1)$. I know that $\mathbb{E}\|A\|\lesssim \sqrt{n}$, where $\|\cdot\|$ is the operator norm. But I have no idea how to show that $\mathbb{E}\|A\|^4\lesssim n^2$. Jensen's inequality does not work here. May I have some hint/reference on this problem? Thank you!
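Not a proof, but the conjectured bound is easy to probe by Monte Carlo; the heuristic constant comes from $\mathbb{E}\|A\| \sim 2\sqrt n$, which suggests the ratio below should hover near $2^4 = 16$:

```python
import numpy as np

rng = np.random.default_rng(0)

def mc_fourth_moment(n, trials=20):
    """Monte Carlo estimate of E||A||^4 over n x n i.i.d. N(0,1) matrices,
    with ||.|| the operator (spectral) norm."""
    samples = [np.linalg.norm(rng.standard_normal((n, n)), 2) ** 4
               for _ in range(trials)]
    return float(np.mean(samples))

# if E||A||^4 <~ n^2, this ratio should stay bounded as n grows
ratios = {n: mc_fourth_moment(n) / n**2 for n in (50, 100)}
print(ratios)
```

The concentration of the operator norm around its mean is what makes a small number of trials already fairly stable here.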

Attempt at a 2nd Collatz Conjecture

Posted: 31 Jul 2021 07:09 PM PDT

I have been watching fascinating videos about number theory for a while now, and I put together my own little mental model of how prime numbers and Collatz work. Please correct me if I'm wrong; all I have is high school maths from 15 years ago to help me.

So 1) Prime numbers.

As far as I understood, mathematicians think they are random, but my understanding of random is different, so what they mean by random is incalculable/unpredictable. Because if I look at the primes, what are they? They are the empty spaces left by a multiplication table.

Basically 13 is a prime because no multiplication of the previous positive integers (greater than 1) can produce it.

So basically if you do a multiplication table with 2x2, 2x3, etc. with all integers, the ones missing from the results will be the primes, am I correct?

1 2 3 4 5

2 4 6 8 10

3 6 9 12 15

4 8 12 16 20

5 10 15 20 25

The numbers missing from this table are: 1, 2, 3, 5, 7, 11, 13... basically the primes, and if I do a bigger table, more primes will reveal themselves. Obviously this table is only capable of reliably identifying primes up to 5, but this is how my thinking goes.

So what is the Collatz Conjecture really? It is basically looking for a number which is a power of 2 (or, every digit is 0 in binary except the first). If it finds one, it will go toward the bottom. If not, it will go toward the bottom by however many 0 digits there are at the end of the binary value (by deleting them), then change the value by multiplying by 3 and adding 1 to get another number which will have a 0 at the end. It keeps repeating this loop until every digit is 0 except the first one, and then it reaches the loop.

It means that the Collatz Conjecture is a proof that every number is the sum of multiple powers of 2. For example: 11 is 1*(2^3) + 0*(2^2) + 1*(2^1) + 1*(2^0). This is basically how we write numbers in binary: 1011. Collatz is looking for a number which is a power of 2, and if it doesn't find one, it moves to another number by multiplying by 3 and adding 1. Sooner or later you will get to a power of 2 and fall into the loop.

So that means other conjectures could be built with other base systems. So I took base 3 and did the same thing. The problem is, now we have three variants in base 3: a number ending in 0, 1 or 2.

Obviously we are looking for a number which ends in only 0s except for the first digit (1000, 2000, 10000, 20000, etc.), so my rules are

  • If the single-digit sum of all digits of N is 1, 4 or 7, then N*2+1

  • If the single-digit sum of all digits of N is 2, 5 or 8, then N*2-1

  • If the single-digit sum of all digits of N is 3, 6 or 9, then N/3

This, I believe, will always result in the loop 3-1-3-1.
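Assuming the digit sums are taken in base 10, the three rules depend on $N$ only through $N \bmod 3$ (a digital root of 1, 4 or 7 means $N \equiv 1 \pmod 3$, and so on), and since $(2N+1)/3 < N$ for $N>1$ and $(2N-1)/3 < N$ always, the value strictly decreases every couple of steps, so the iteration does fall into the 3-1 loop. A sketch:

```python
def step(n):
    # the three rules reduce to n mod 3:
    #   digit sum reducing to 1,4,7  <=>  n % 3 == 1  ->  2n + 1
    #   digit sum reducing to 2,5,8  <=>  n % 3 == 2  ->  2n - 1
    #   digit sum reducing to 3,6,9  <=>  n % 3 == 0  ->  n / 3
    r = n % 3
    if r == 0:
        return n // 3
    return 2 * n + 1 if r == 1 else 2 * n - 1

def reaches_loop(n, limit=10000):
    """Iterate the base-3 rule until n hits the 3 -> 1 -> 3 loop, or give up."""
    for _ in range(limit):
        if n in (1, 3):
            return True
        n = step(n)
    return False

# every starting value up to 1000 falls into the 3-1-3-1 loop
all_reach = all(reaches_loop(n) for n in range(1, 1001))
print(all_reach)
```

Note that this makes the base-3 variant much easier than the original conjecture: the rules are designed so the next value is always divisible by 3, which forces descent.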

My question is: do I understand these math problems right? Has anyone done another Collatz-style conjecture in a base besides base 2?

Sorry for my English; it is only my second language.

Update: And how that relates to prime numbers? I had a train of thought about it, but it's almost 4 am now.

Difference between weak and distributional derivatives

Posted: 31 Jul 2021 07:42 PM PDT

I'm studying weak and distributional derivatives and solutions, and I have a few questions about it.

  1. From my understanding, one defines a weak derivative $v \in L^{1}_{loc}(\Omega)$ of $u \in L^{1}_{loc}(\Omega)$ by requiring that

$$ \int_{\Omega} v \varphi \,dx = (-1)^{|\alpha|}\int_{\Omega} u\, D^{\alpha}\varphi \,dx$$

for all test functions $\varphi \in C^{\infty}_{c}(\Omega)$. So $v$ is the weak derivative $D^{\alpha}u$ of $u$.

My question is: what is the difference between this definition and the definition of distributional derivatives? From what I understood, the integral of ($\int_{\Omega} u \varphi dx$) defines a distribution so the distributional derivative is defined as seen in equation above.

I understand that while working with weak derivatives, I'm differentiating functions and when one is working with distributional derivatives it's obviously differentiating distributions. But what is the difference between these two definitions?

  2. Is it necessary, for the definition of a weak derivative, that the function $u \in L^{1}_{loc}$?

Thanks.

Mandelbrot set; are these trajectories chaotic?

Posted: 31 Jul 2021 07:02 PM PDT

I am using the Mandelbrot set, with an exponent of 2 so that the iterative equation is z = z^2 + c. The escape threshold is 4.0, and the maximum number of iterations is 5000.

I find that all trajectories that belong to the set are cyclical, with the exception of some trajectories that do not repeat during the iterations. The four starting locations are:

-1, -0.25
-1, 0.25
0.25, -0.5
0.25, 0.5

Are these trajectories chaotic?

[Image: Trajectories]
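The four listed $c$ values appear to lie exactly on the boundary of the Mandelbrot set (for instance $0.25+0.5i$ is on the main cardioid), which would explain orbits that stay bounded without ever repeating. A quick check that none of them escapes within 5000 iterations:

```python
def trajectory_escapes(cx, cy, max_iter=5000, threshold=4.0):
    """Iterate z <- z^2 + c from z = 0; report whether |z|^2 ever
    exceeds the escape threshold (4.0, i.e. |z| > 2) within max_iter."""
    c = complex(cx, cy)
    z = 0j
    for _ in range(max_iter):
        z = z * z + c
        if (z.real * z.real + z.imag * z.imag) > threshold:
            return True
    return False

points = [(-1, -0.25), (-1, 0.25), (0.25, -0.5), (0.25, 0.5)]
escapes = [trajectory_escapes(cx, cy) for cx, cy in points]
print(escapes)
```

Orbits at such parabolic boundary parameters converge extremely slowly, so within a finite iteration budget they look neither periodic nor escaping.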

Solve $Ax=b$ subject to $x=Cy+d$

Posted: 31 Jul 2021 07:37 PM PDT

$Ax=b$ is a linear system of equations in which $A$ is a square invertible matrix. However, I would like to approximately solve $Ax=b$ in which the constraint $x=Cy+d$ is exactly enforced. $C$ is $m\times n$ where $m>n$.

I need analytical solutions and am not interested in numerical approaches.

Thank you
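If "approximately solve" is taken to mean least squares in the Euclidean norm, substituting the constraint gives the overdetermined system $(AC)y = b - Ad$, whose analytical solution is $y = (C^{\mathsf T}A^{\mathsf T}AC)^{-1}C^{\mathsf T}A^{\mathsf T}(b - Ad)$, then $x = Cy + d$. A sketch with illustrative random data:

```python
import numpy as np

rng = np.random.default_rng(2)
m, n = 5, 3  # C is m x n with m > n; A is m x m and invertible
A = rng.standard_normal((m, m))
C = rng.standard_normal((m, n))
d = rng.standard_normal(m)
b = rng.standard_normal(m)

# substituting x = C y + d into A x = b gives (A C) y = b - A d;
# lstsq computes its least-squares solution y = (A C)^+ (b - A d)
M = A @ C
y, *_ = np.linalg.lstsq(M, b - A @ d, rcond=None)
x = C @ y + d

# at the optimum the residual is orthogonal to the column space of A C
residual = M @ y - (b - A @ d)
print(np.linalg.norm(M.T @ residual))
```

The orthogonality of the residual to the columns of $AC$ is exactly the normal-equation condition, so this check confirms the closed form above.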

Let R be a PID. Then, every nonempty set of ideals of R has a maximal element

Posted: 31 Jul 2021 06:58 PM PDT

I came across the following proof for this proposition but there are some things I don't understand.

Proof

Let $S$ be the set of all proper ideals of $R$. It follows that $S$ is non-empty and it is partially ordered by inclusion. Let $I_1 \subseteq I_2 \subseteq $ ... be an arbitrary increasing chain of ideals in $S$. Let $I=\bigcup_{n} I_n$.

Since the $I_n$'s are nonempty, it follows that $I$ is nonempty. $I$ is an ideal. Since $R$ is a PID, $I = (a)$. We find that $a \in I=\bigcup_{n} I_n$, so $a \in I_n$ for some $n$. We get $I_n = I_{n+1} = \cdots$.

Each chain of ideals has an upper bound. By Zorn's lemma, the nonempty set of $I_n$'s of $R$ has a maximal element, the maximal ideal containing $I$.

Questions

Why can we use the set of all proper ideals of $R$ for our proof when it talks about any set?

Why is it true that $I_n = I_{n+1}$? I'm hoping it's a typo, because what it really uses is that the previous ideal is contained in the next.

I'm also not sure why the maximal element would contain $I$. I understand it's not relevant to the proof, since we are only asked to prove existence, but I got curious.

Find closed form of $a_{n+1} = 4a_n - 2$

Posted: 31 Jul 2021 07:04 PM PDT

Is there an easy way to find the closed form of $a_{n+1} = 4a_n - 2$, with $a_1 = 2$?

I was learning about finding the closed form of Fibonacci-style recurrences like $a_{n+2} = 2a_{n+1} - 3a_{n}$, and then this problem appeared. The textbook did a ton of rearranging to cancel out the $-2$ term and get it into the form of a Fibonacci-style recurrence. I'm wondering if there is an easier way.
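One common shortcut is to subtract the fixed point $x^* = 2/3$ of $x \mapsto 4x-2$, which turns the recurrence into the purely geometric $b_{n+1} = 4b_n$ and suggests the closed form $a_n = (4^n+2)/3$. A quick check of that guess:

```python
def a_recursive(n):
    # a_1 = 2, a_{k+1} = 4 a_k - 2
    a = 2
    for _ in range(n - 1):
        a = 4 * a - 2
    return a

def a_closed(n):
    # subtracting the fixed point x* = 2/3 gives b_n = a_n - 2/3 with
    # b_{n+1} = 4 b_n and b_1 = 4/3, so a_n = 2/3 + (4/3) * 4^(n-1)
    #         = (4^n + 2) / 3   (an integer, since 4^n = 1 mod 3)
    return (4**n + 2) // 3

match = all(a_recursive(n) == a_closed(n) for n in range(1, 16))
print(match)
```

The same fixed-point shift works for any affine recurrence $a_{n+1} = ra_n + s$ with $r \neq 1$.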

Proof of Euclid's algorithm for non-coprime numbers

Posted: 31 Jul 2021 07:26 PM PDT

This question is part of my assignment and I am really struggling with it.

Let us now apply these steps to a more general situation. As before we will suppose that the Euclidean algorithm runs in $3$ steps.

$$r_0 ÷ r_1 =m_1 R\;r_2$$

$$r_1 ÷ r_2 =m_2 R\;r_3$$

$$r_2 ÷ r_3 =m_3 R\;0$$

Prove that $r_3$ is a factor of $r_0$ and $r_1$.


They have given the example of $\gcd(81,33) = 3$.

  1. The question is asking for a proof of why $3$ (the GCD) should be a factor of $15$, $33$ and $81$. You should have found that the last step of the algorithm gives $15 ÷ 3 = 5\;R\;0$. Use this to conclude that 3 is a factor of 15.
  2. You should have found that the second step of the algorithm gives $33 ÷ 15 = 2\;R\;3$. Use this to conclude that 3 is a factor of 33 and 15.
  3. You should have found that the first step of the algorithm gives $81 ÷ 33 = 2\;R\;15$. Use this to conclude that 3 is a factor of 81 and 33.

I understand that since 15 is directly divisible by 3, one of its factors is 3. Intuitively I also understand that 33 and 81 have 3 as a factor, but I do not know how I can prove it with what they are asking me.
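The inductive argument can be traced numerically: each step records the identity $r_{i-1} = m_i r_i + r_{i+1}$, so whenever the gcd divides both terms on the right it also divides the left. A sketch on the worked example $\gcd(81,33)$:

```python
def euclid_steps(r0, r1):
    """Run the Euclidean algorithm, recording (r_{i-1}, r_i, m_i, r_{i+1})."""
    steps = []
    while r1 != 0:
        m, rem = divmod(r0, r1)
        steps.append((r0, r1, m, rem))
        r0, r1 = r1, rem
    return steps, r0  # r0 is now the gcd

steps, g = euclid_steps(81, 33)
for r_prev, r_cur, m, rem in steps:
    # each step is the identity r_prev = m * r_cur + rem, so if g divides
    # r_cur and rem then g divides r_prev: walk the steps bottom-up
    print(f"{r_prev} ÷ {r_cur} = {m} R {rem}")
print("gcd =", g)
```

Reading the recorded steps from the last line upward is exactly the three-part argument the assignment outlines.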

Limit of the $n$-th root of this sequence.

Posted: 31 Jul 2021 07:04 PM PDT

I'm struggling to understand why $\lim_{n\to\infty} \sqrt[n]{\sin(\frac{1}{n^2})}=1$ while $\lim_{n\to\infty}\sin(\frac{1}{n^2})=0.$ What causes the sequence to tend to $1$ when it's under the $n$-th root?
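Numerically, $\sin(1/n^2)\approx 1/n^2$, so the $n$-th root is roughly $e^{-2\ln n/n}\to 1$: the exponent $1/n$ shrinks faster than the base does. A quick check:

```python
import math

# sin(1/n^2) ~ 1/n^2, so sin(1/n^2)^(1/n) ~ exp(-2 ln(n) / n) -> e^0 = 1;
# the n-th root drags the tiny values back up toward 1
vals = [math.sin(1 / n**2) ** (1 / n) for n in (10, 100, 1000, 10000)]
print(vals)
```

The printed values increase monotonically toward 1, which matches the limit in the question.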

Sequence and multiplicity of sums of integers squared $n_1^2+n_2^2+\dots+n_d^2$?

Posted: 31 Jul 2021 07:40 PM PDT

Consider a summation over $d$ integers such as

$$R_d=\sum_{n_1,n_2,\dots,n_d\in\mathbb{Z}}f\left(\sum_{i=1}^d n_i^2\right)$$

Is there a way to express this sum in terms of a specific integer sequence $S$, such that

$$R_d = \sum_{n\in S}M_n f(n)$$

with term multiplicity $M_n$?

What is the sequence $S$ and the multiplicity $M_n$ given by in this case?
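For small $d$ the multiplicities can be tabulated by brute force; for $d=2$, $M_n$ is the classical sum-of-two-squares count $r_2(n)$. A sketch (the radius is an illustrative choice):

```python
from itertools import product

def multiplicities(d, radius):
    """M_n = #{(n_1,...,n_d) in Z^d : n_1^2 + ... + n_d^2 = n},
    by brute force over |n_i| <= radius (exact for n <= radius^2)."""
    M = {}
    for tup in product(range(-radius, radius + 1), repeat=d):
        n = sum(k * k for k in tup)
        M[n] = M.get(n, 0) + 1
    return M

M = multiplicities(2, 10)
# for d = 2: M_0 = 1, M_1 = 4 (the points (±1,0), (0,±1)), M_25 = 12
print(M[0], M[1], M[25])
```

The sequence $S$ is then just the set of $n$ with $M_n > 0$, i.e. the integers representable as a sum of $d$ squares.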

Validity of proof $\zeta(s) \neq 0$ for $\sigma >1$.

Posted: 31 Jul 2021 07:30 PM PDT

The Riemann zeta function can be expressed as an infinite series, as well as an infinite Euler product over primes $p$.

$$ \zeta(s) = \sum_n 1/n^s = \prod_p(1-1/p^s)^{-1} $$

Here $s=\sigma+it$ and the function is defined for $\sigma>1$.

A natural question to ask is whether there are any zeros in the region $\sigma>1$.

Question: Is the following outline proof sufficient and correct?


Step 1

We note that none of the factors $(1-1/p^s)^{-1}$ is ever zero, because $p^s = e^{s\ln(p)} \neq 0$ for $\sigma>1$.


Step 2

The previous step is insufficient. We also need to demonstrate that the infinite product doesn't converge to zero.

To do this, we make use of the following convergence criteria:

  • if $\sum|a_n|$ converges, then $\prod(1+a_n)$ converges to a finite non-zero value.

Step 3

In this case $\sum |a_n| = \sum 1/p^s$, which we know converges for $\sigma > 1$.

Therefore the Euler Product converges to a non-zero finite value.

Homotopy groups of projective Lie groups PO(N), PSO(N), and PSpin(N)

Posted: 31 Jul 2021 06:49 PM PDT

Previously, we have learned from Homotopy groups O(N) and SO(N): $\pi_m(O(N))$ v.s. $\pi_m(SO(N))$ that:

  • $\pi_m(SO(N))$: a table consisting of the groups $\pi_m(SO(N))$ for $1 \leq m \leq 12$ and $2 \leq N \leq 12$ can be found on the nLab page for the orthogonal group in the section on homotopy groups.

  • $\pi_m(O(N))$: $O(N)$ consists of two connected components which are both diffeomorphic to $SO(N)$. So $\pi_0(O(N)) = \mathbb{Z}_2$, $\pi_0(SO(N)) = 0$, and for $m \geq 1$, $\pi_m(O(N)) = \pi_m(SO(N))$.

  • $\pi_m(\operatorname{Spin}(N))$: note that $\operatorname{Spin}(N)$ is a double cover of $SO(N)$. When $N = 1$, we see that $\operatorname{Spin}(1) = \mathbb{Z}_2$ so $\pi_0(\operatorname{Spin}(1)) = \mathbb{Z}_2$ and all its other homotopy groups are trivial, while for $N = 2$ we have $\operatorname{Spin}(2) = S^1$ which has first homotopy group $\mathbb{Z}$ and all higher homotopy groups trivial. When $N \geq 3$, $\operatorname{Spin}(N)$ is the universal cover of $SO(N)$ so $\pi_1(\operatorname{Spin}(N)) = 0$ and for $m \geq 2$, $\pi_m(\operatorname{Spin}(N)) = \pi_m(SO(N))$.

This post asks for the homotopy groups of the projective Lie groups $PO(N)\equiv\frac{O(N)}{Z(O(N))}$, $PSO(N)\equiv\frac{SO(N)}{Z(SO(N))}$, and $PSpin(N)\equiv\frac{Spin(N)}{Z(Spin(N))}$, where $Z(G)$ denotes the center of $G$.

Note that (please double check):

$$Z(SO(n))=\begin{cases}\mathbb{Z}_2,&n \text{ even but $n \neq 2$}\\ SO(2), & n=2.\\ 0,&n \text{ odd}\end{cases}$$

$$Z(O(n))=\mathbb{Z}_2.$$

$$Z(Spin(n))=\begin{cases}\mathbb{Z}_2,&n\equiv1,3\,\mod 4\\ \mathbb{Z}_4&n\equiv 2\,\mod 4\\ \mathbb{Z}_2\oplus\mathbb{Z}_2&n\equiv0\;\mod 4\end{cases}$$

  • $\pi_m(PSO(N))$:

$$\pi_m(PSO(N))=?$$

  • $\pi_m(PO(N))$: $$\pi_m(PO(N))=?$$

  • $\pi_m(PSpin(N))$: $$\pi_m(PSpin(N))=?$$

It seems that we have the group isomorphism $PSO(N)\cong PO(N) \cong PSpin(N)$, so we can answer all questions at once.

Can $\pi$ be defined in a p-adic context?

Posted: 31 Jul 2021 07:31 PM PDT

I am not at all an expert in p-adic analysis, but I was wondering if there is any sensible (or even generally accepted) way to define the number $\pi$ in $\mathbb Q_p$ or $\mathbb C_p$.

I think that circles, therefore also angles, are problematic in a p-adic context, but $\pi$ appears in many other contexts. Of course there are many known series that sum to $\pi$, some may converge p-adically, but those that converge may have different limits, and I think some more motivation would be needed to designate one as an analog of $\pi$.

Maybe one could find an analog based on $e^{n\pi i} = (-1)^n$, or even $\int_{\mathbb R} e^{-x^2/2}dx = \sqrt\pi$. So my question:

Is there or are there p-adic definitions of $\pi$? If not, could we sensibly define $\pi_p$, and how?

How are tautologies formally constructed in propositional logic if they rely on axioms?

Posted: 31 Jul 2021 07:45 PM PDT

I realise that an expression like $x=x$ is not tautological in propositional logic, because "=" is not formally defined as a propositional connective. However, what I am confused about is how these propositional connectives are defined in the first place. Surely, if these connectives are defined by axioms, they are assumed to be true, and so anything constructed from them would also be based on assumed axioms, including such tautologies. So whilst true statements can be constructed, I don't really understand how these can be tautologies, since they rely on axioms (defining the propositional connectives) that are assumed true, and are not then true inherently. So I was just wondering how this is got around, and how defining propositional connectives works without ending up in the same situation as the "=" sign in $x=x$.

Apologies in advance if this is a silly question.

Question in generated sigma algebra

Posted: 31 Jul 2021 06:54 PM PDT

In measure theory, they say that,
$\mathcal{F(C)}$ is a $\sigma$-algebra generated by C.

And,
(i) $\mathcal{F(C)} = \underset{\mathcal{C \subseteq P, \ P \ is \ a \ \sigma-algebra}}{\bigcap}$$\mathcal{P}$,
(ii) $\mathcal{F(C)}$ is the smallest $\sigma$-algebra containing C.

I'm having some trouble accepting this. In fact, for example:
Let,
$X = \left\{ a,b,c,d,e \right\}$
$C = \left\{ a,b,c \right\}$

$\mathcal{F(C)} = \left\{\emptyset, X, \left\{ a \right\}, \left\{ b \right\}, \left\{ c \right\}, \left\{ a,b \right\}, \left\{ a,b,c \right\}, \left\{ a,c \right\}, \left\{ d,e \right\}, \left\{ a,b,d,e \right\}, \left\{ a,c,d,e \right\},\left\{ b,d,e\right\}, \left\{ c,d,e\right\}, \left\{ a,d,e \right\} , ...\right\}$
$\mathcal{F(\left\{ d, e \right\})} = \left\{ \emptyset, X, \left\{ d\right\}, \left\{ e\right\}, \left\{ d,e\right\}, \left\{a,b,c \right\}, \left\{ a,b,c,e \right\}, \left\{ a,b,c,d\right\} \right\}$

Clearly, $\mathcal{F(\left\{ d, e \right\})}$ contains fewer elements than $\mathcal{F(C)}$.
But how is that possible since $\mathcal{F(C)} = \underset{\mathcal{C \subseteq P, \ P \ is \ a \ \sigma-algebra}}{\bigcap}$$\mathcal{P}$ and $\mathcal{F(\left\{ d, e \right\})}$ is one of the $\mathcal{P}'s$?

Edit based on the comments of @ArturoMagidin:
$C = \left\{ \left\{a\right\},\left\{b\right\},\left\{c\right\} \right\}$ (since $C$ should be the collections of subsets of $X$)

$\mathcal{F(C)} = \left\{\emptyset, X, \left\{ a \right\}, \left\{ b \right\}, \left\{ c \right\}, \left\{ a,b \right\}, \left\{ a,b,c \right\}, \left\{ a,c \right\}, \left\{ d,e \right\}, \left\{ a,b,d,e \right\}, \left\{ a,c,d,e \right\},\left\{ b,d,e\right\}, \left\{ c,d,e\right\}, \left\{ a,d,e \right\} , ...\right\}$

$\mathcal{F(\left\{ \left\{d\right\},\left\{e\right\} \right\})} = \left\{ \emptyset, X, \left\{ d\right\}, \left\{ e\right\}, \left\{ d,e\right\}, \left\{a,b,c \right\}, \left\{ a,b,c,e \right\}, \left\{ a,b,c,d\right\} \right\}$

Since $\mathcal{F(\left\{ \left\{d\right\},\left\{e\right\} \right\})}$ doesn't contain $C$, it is not one of the intersecting $\mathcal{P}$'s.

Optimization problem that involve square root in the constraint?

Posted: 31 Jul 2021 07:10 PM PDT

I am a researcher in telecommunications, and currently I am running into a difficult optimization with a huge square root.

Minimize $A$ such that the following constraints $C_1$ and $C_2$ are satisfied $\begin{gathered} {C_1}:{A_{Min}} \leqslant A \leqslant {A_{Max}} \hfill \\ {C_2}:\frac{{a{A^2}\sin (\theta )\left( {\cos (\theta )\sqrt {\frac{{{B^2}\left( {\frac{1}{{{{\sin }^2}(\theta )}}} \right)}}{{{A^2}}} - 1} + \sin (\theta )} \right) - a{B^2}}}{{{A^2} - {B^2}}} + D \leqslant \frac{{a{C^2}\sin (\theta )\left( {\cos (\theta )\sqrt {\frac{{{B^2}\left( {\frac{1}{{{{\sin }^2}(\theta )}}} \right)}}{{{C^2}}} - 1} + \sin (\theta )} \right) - a{B^2}}}{{{C^2} - {B^2}}} \hfill \\ \end{gathered}$

$\theta$ is an angle that satisfies $0 \leqslant \theta \leqslant \frac{\pi }{2}$

$a,A,B,C$ are all positive

$D$ is strictly negative

$A_{min}$ and $A_{max}$ are the upper bound and lower bound of $A$ respectively.

At first glance, it seems that there is a lot of symmetry between the left- and right-hand sides of constraint $C_2$, but I do not know how to exploit it.

The only effort so far is to get rid of the trigonometric functions by using the Weierstrass substitution, letting $t = \tan \left( {\frac{\theta }{2}} \right)$ where $t \in \left[ {0,1} \right]$ because $0 \leqslant \theta \leqslant \frac{\pi }{2}$. After that, we have the following: $\begin{gathered} \sin (\theta ) = \frac{{2t}}{{1 + {t^2}}} \hfill \\ \cos (\theta ) = \frac{{1 - {t^2}}}{{1 + {t^2}}} \hfill \\ \end{gathered}$

With the Weierstrass substitution, I guess the optimization problem is now purely in polynomial form, but it nonetheless still poses a major challenge for me. I have a gut feeling that this problem might have something to do with all the squared terms (i.e., a quadratic-optimization flavor).

Constraint $C_2$ then becomes $\frac{{a{A^2}(2t)\left( {\frac{{{S_1}\left( {1 - {t^2}} \right)}}{{{t^2} + 1}} + \frac{{2t}}{{{t^2} + 1}}} \right)}}{{\left( {{t^2} + 1} \right)\left( {{A^2} - {B^2}} \right)}} + D \leqslant \frac{{a{C^2}(2t)\left( {\frac{{{S_2}\left( {1 - {t^2}} \right)}}{{{t^2} + 1}} + \frac{{2t}}{{{t^2} + 1}}} \right)}}{{\left( {{t^2} + 1} \right)\left( {{C^2} - {B^2}} \right)}}$

Where $\begin{gathered} {S_1}^2 = \frac{{{B^2}{{\left( {\frac{{1 + {t^2}}}{{2t}}} \right)}^2}}}{{{A^2}}} \hfill \\ {S_2}^2 = \frac{{{B^2}{{\left( {\frac{{1 + {t^2}}}{{2t}}} \right)}^2}}}{{{C^2}}} \hfill \\ \end{gathered} $

Please help me with this; thank you for your enthusiasm!

Clarification:

1/ Regarding the question of fixed and varied parameters: $A$ is the decision variable that needs to be minimized, and the rest of the variables $a,B,C$ are fixed (physical interpretation: they are measurement values that can be captured very quickly and accurately, so I consider them fixed). The answer to this problem should be of the form: $A$ equals some combination of the other fixed parameters.

Also, ${A_{\min }}$ and ${A_{\max }}$ are fixed values that depend on the transmission standard and are not obtained from measurement.

However, in practice the angle $\theta$ will also vary, albeit very slowly. Therefore, I plan to solve the case where $\theta$ is fixed first.

2/ It is not always possible to guarantee that ${A^2} - {B^2}$ is positive, but if this assumption makes the analysis easier, I think it is acceptable, although it limits the generality of the analysis.

Note that, from my domain knowledge, $0<B<1$ is always guaranteed.

Also, I think it is fine to assume $C^2 - B^2$ to be positive too.
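Even without a closed form, a direct numerical sketch can probe feasibility. Below, `f` is my transcription of the common shape of both sides of $C_2$ (with $X = A$ or $X = C$); please double-check it against the original expression, and note that every numeric value is an illustrative assumption, not from the question:

```python
import numpy as np

def f(X, a, B, theta):
    """Common shape of both sides of C2 (transcribed, so verify carefully)."""
    s, c = np.sin(theta), np.cos(theta)
    inner = B**2 / (X**2 * s**2) - 1.0
    if inner < 0:
        return np.nan  # square root undefined: constraint not evaluable here
    return (a * X**2 * s * (c * np.sqrt(inner) + s) - a * B**2) / (X**2 - B**2)

def minimize_A(a, B, C, D, theta, A_min, A_max, grid=10001):
    """Smallest A on a grid satisfying C1 and C2, or None if infeasible."""
    rhs = f(C, a, B, theta)
    for A in np.linspace(A_min, A_max, grid):
        lhs = f(A, a, B, theta)
        if not np.isnan(lhs) and lhs + D <= rhs:
            return A
    return None

# illustrative numbers only (assumptions chosen to keep the sqrt real)
A_star = minimize_A(a=1.0, B=0.9, C=2.5, D=-0.5, theta=0.3, A_min=1.0, A_max=2.0)
print(A_star)
```

A grid search is of course not the analytical answer sought, but it gives a ground truth against which any closed-form candidate can be validated.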

$(1+A)^n \ge (1+a_1)\cdot(1+a_2)\cdots(1+a_n) \ge (1+G)^n$

Posted: 31 Jul 2021 07:29 PM PDT

$A$ and $G$ are arithmetic mean and the geometric mean respectively of $n$ positive real numbers $a_1$,$a_2$,$\ldots$,$a_n$ . Prove that

  1. $(1+A)^n$$\ge$$(1+a_1)\cdot(1+a_2)\cdots(1+a_n)$$\ge$$(1+G)^n$
  2. if $k$$\gt$$0$, $(k+A)^n$$\ge$$(k+a_1)\cdot(k+a_2)\cdots(k+a_n)$$\ge$$(k+G)^n$

My try (Ques -1): To prove $(1+a_1)\cdot(1+a_2)\cdots(1+a_n)$$\ge$$(1+G)^n$

$(1+a_1)\cdot(1+a_2)\cdots(1+a_n)=1+\sum{a_1}+\sum{a_1}\cdot{a_2}+\sum{a_1}\cdot{a_2}\cdot{a_3}+\cdots+(a_1\cdot{a_2}\cdots{a_n})$

Let $a_1,a_2,\ldots,a_n$ be $n$ positive real numbers and apply AM$\ge$GM:

$\frac{(a_1+a_2+\cdots+a_n)}{n}\ge(a_1\cdot{a_2}\cdots{a_n})^\frac{1}{n}$ implies $\sum{a_1}\ge n\cdot{G}$ because $G=(a_1\cdot{a_2}\cdots{a_n})^\frac{1}{n}$

again consider $(a_1\cdot{a_2}),(a_1\cdot{a_3}),\ldots({a_1}\cdot{a_n}),(a_2\cdot{a_3}),(a_2\cdot{a_4})\ldots,(a_2\cdot{a_n}),(a_3\cdot{a_4})\ldots(a_{n-1}\cdot{a_n})$ be $\frac{n(n-1)}{2!}$ positive real numbers.

Then Applying AM$\ge$GM , we get, $\frac{\sum{a_1}\cdot{a_2}}{\frac{n(n-1)}{2!}}$ $\ge$ $\bigl(a_{1}^{n-1}\cdot{a_{2}^{n-1}}\cdots{a_{n}^{n-1}}\bigl)^\frac{2!}{n(n-1)}$ implies $\sum{a_1}\cdot{a_2}\ge \frac{n(n-1)}{2!}G^2$

Similarly, $\sum{a_1}\cdot{a_2}\cdot{a_3}\ge \frac{n(n-1)(n-2)}{3!}G^3$ and so on.

Therefore, $(1+a_1)\cdot(1+a_2)\cdots(1+a_n)\ge 1+nG+\frac{n(n-1)}{2!}G^2+\frac{n(n-1)(n-2)}{3!}G^3+\cdots+G^n$

Therefore, $(1+a_1)\cdot(1+a_2)\cdots(1+a_n)\ge (1+G)^n$

To prove $(1+A)^n$$\ge$$(1+a_1)\cdot(1+a_2)\cdots(1+a_n)$

Let $(1+a_1),(1+a_2),\ldots,(1+a_n)$ be $n$ positive real numbers and apply AM$\ge$GM:

$\frac{(1+a_1)+(1+a_2)+\cdots+(1+a_n)}{n} \ge [(1+a_1)\cdot(1+a_2)\cdot\cdots(1+a_n)]^\frac{1}{n}$

$\frac{n+a_1+a_2+\cdots{a_n}}{n}\ge [(1+a_1)\cdot(1+a_2)\cdot\cdots(1+a_n)]^\frac{1}{n}$

$(1+A)^n\ge(1+a_1)\cdot(1+a_2)\cdot\cdots(1+a_n)$

My try (Ques -2): To prove $(k+A)^n$$\ge$$(k+a_1)\cdot(k+a_2)\cdots(k+a_n)$

Let $(k+a_1),(k+a_2),\ldots,(k+a_n)$ be $n$ positive real numbers and apply AM$\ge$GM:

$\frac{(k+a_1)+(k+a_2)+\cdots(k+a_n)}{n}\ge[(k+a_1)\cdot(k+a_2)\cdots(k+a_n)]^\frac{1}{n}$

$(k+A)^n\ge (k+a_1)\cdot(k+a_2)\cdots(k+a_n)$

To prove $(k+a_1)\cdot(k+a_2)\cdots(k+a_n)$$\ge$$(k+G)^n$

$(k+a_1)\cdot(k+a_2)\cdots(k+a_n)=k^n+k^{n-1}\sum{a_1}+k^{n-2}\sum{a_1\cdot{a_2}}+\cdots+(a_1\cdot{a_2}\cdots{a_n})$

now

$\sum{a_1}\ge n\cdot{G}$,

$\sum{a_1}\cdot{a_2}\ge \frac{n(n-1)}{2!}G^2$

$\sum{a_1}\cdot{a_2}\cdot{a_3}\ge \frac{n(n-1)(n-2)}{3!}G^3$ and so on.

Therefore

$(k+a_1)\cdot(k+a_2)\cdots(k+a_n)\ge k^n+nk^{n-1}G+\frac{n(n-1)}{2!}k^{n-2}G^2+\cdots+G^n$

Therefore, $(k+a_1)\cdot(k+a_2)\cdots(k+a_n)\ge (k+G)^n$

My Ques :

  1. Have I solved the questions correctly?
  2. How to prove $\sum{a_1}\cdot{a_2}\cdot{a_3}\ge \frac{n(n-1)(n-2)}{3!}G^3$

Thanks
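A quick numerical sanity check of the chain $(1+A)^n \ge \prod(1+a_i) \ge (1+G)^n$ on random inputs (illustrative only, not a proof):

```python
import math
import random

random.seed(0)
checked = 0
for _ in range(200):
    n = random.randint(2, 6)
    a = [random.uniform(0.1, 5.0) for _ in range(n)]
    A = sum(a) / n                        # arithmetic mean
    G = math.prod(a) ** (1.0 / n)         # geometric mean
    P = math.prod(1 + x for x in a)       # the middle product
    # small multiplicative slack for floating-point error
    assert (1 + A) ** n >= P * (1 - 1e-12)
    assert P >= (1 + G) ** n * (1 - 1e-12)
    checked += 1
print(checked)
```

Random testing like this cannot replace the AM-GM argument above, but it is a cheap way to catch a sign or index mistake in a proposed proof.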

How can I prove a line can't meet an ellipse in more than two points?

Posted: 31 Jul 2021 07:39 PM PDT

Without knowing the equation for an ellipse, but rather just with the geometric definition i.e. the ellipse is the locus of points with $PF_1 + PF_2 = k > F_1F_2$.

I tried to use a classic construction: $\omega = \odot(F_1,k)$, so for $Q \in \omega$, the point $P$ of meeting of the perpendicular bisector between $Q$ and $F_2$, with $QF_1$ is in the ellipse.

By the converse: given $P$ in the ellipse, the point $Q = PF_1 \cap \odot(P,PF_2)$, not in between $P$ and $F_1$, is in $\omega$.

The problem is that this last operation transforms lines into weird curves, while I was hoping it would transform lines into other lines, and thus a line would meet a circle in more than two points.

Notice that it is not obvious from this definition that an ellipse can be transformed into a circle by dilation along some axis.

Over skepticism in math [closed]

Posted: 31 Jul 2021 07:10 PM PDT

Hello, I am an undergraduate who is trying to self-teach linear algebra. I am reading Robert Beezer's linear algebra book. My problem is that I read the first 80 pages easily but can't move forward in the book, as I get stuck revisiting old proofs, making sure I understand each step well. I also sometimes find that an old proof is missing steps, so I complete them on my own, but I am scared that my completion is not rigorous enough or is missing something. Does anyone have advice on how not to waste time and to finish the book? Thanks in advance.

What is the distribution of the dot product of a flat Dirichlet vector with a fixed vector?

Posted: 31 Jul 2021 07:01 PM PDT

I am using this question as a reference What is the distribution of the dot product of a Dirichlet vector with a fixed vector?

I have the following problem: $S = w \cdot C$, where the vector $w$ is random with components having an $N$-dimensional flat Dirichlet distribution with $a=1$ (just the uniform distribution on the $(N-1)$-simplex).

How can I write down the PDF of $S$ directly?
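While working out the exact PDF, it can help to simulate $S$; a flat Dirichlet $w$ (all concentration parameters equal to 1) is one call in NumPy. The fixed vector $c$ below is an illustrative assumption:

```python
import numpy as np

rng = np.random.default_rng(0)
N = 4
c = np.array([1.0, 2.0, 3.0, 5.0])  # fixed vector: illustrative values

# flat Dirichlet (all concentration parameters 1) = uniform on the simplex
w = rng.dirichlet(np.ones(N), size=200000)
s = w @ c

# S is supported on [min(c), max(c)] and E[S] = mean(c), since E[w_i] = 1/N;
# a density histogram gives a numerical approximation of the PDF
hist, edges = np.histogram(s, bins=50, density=True)
print(s.mean())
```

Comparing the histogram against any closed-form candidate PDF is a direct way to validate a derivation based on the linked question.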

Expected Value Of a Random Grid That Contains Multipliers

Posted: 31 Jul 2021 06:51 PM PDT

I am a software developer and recently came across this problem when working on a hobby project; my knowledge of probability is too limited to solve it.

I have a bag that contains 5 negative ones, 5 zeros, 10 ones, 5 twos, and 5 threes. The tiles are set randomly on a 5 by 5 grid, with 5 tiles left in the bag. The threes also multiply all 8 of the adjacent squares by 2 (a number adjacent to $n$ threes is multiplied by $2^n$). What is the best way to calculate the expected value of the grid for any known bag that can contain a random amount of any numbers? Bonus points if it can handle any size (and rectangular shape) of grid.

Eg. Bag ( 2 1 3 3 3 ), Grid Value: 27

$$\begin{array}{c|c|c|c|c} 3&2&1&-1&1\\ \hline 1&1&0&0&-1\\ \hline 1&0&3&2&2\\ \hline -1&0&-1&2&1\\ \hline 1&-1&1&1&0 \end{array}$$

Without the multiplier, the solution is quite simple:

Multiply the number of values in the grid (25) by the sum of all of the numbers in the bag, divided by the number of numbers in the bag; or $(n \cdot s) / a$.

Since the tiles do not interact, the fact that they are in a grid does not even matter. For the same reason, it's not that useful for solving this problem, but I thought I would mention it as a start.

A kinda related question of mine
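The multiplier interaction can at least be estimated by Monte Carlo. A sketch, under the assumption that a 3 on the grid doubles each of its (up to 8) neighbours, including other 3s (the question is ambiguous on that point):

```python
import random

def grid_value(grid, rows=5, cols=5):
    """Sum of cell values, each doubled once per adjacent 3
    (assumption: a 3 also boosts neighbouring 3s)."""
    total = 0
    for r in range(rows):
        for c in range(cols):
            threes = sum(
                1
                for dr in (-1, 0, 1) for dc in (-1, 0, 1)
                if (dr or dc)
                and 0 <= r + dr < rows and 0 <= c + dc < cols
                and grid[r + dr][c + dc] == 3
            )
            total += grid[r][c] * (2 ** threes)
    return total

def expected_value(bag, rows=5, cols=5, trials=4000):
    """Monte Carlo estimate: shuffle the bag, deal rows*cols tiles, average."""
    random.seed(0)
    tiles = list(bag)
    acc = 0
    for _ in range(trials):
        random.shuffle(tiles)
        grid = [tiles[r * cols:(r + 1) * cols] for r in range(rows)]
        acc += grid_value(grid)
    return acc / trials

bag = [-1] * 5 + [0] * 5 + [1] * 10 + [2] * 5 + [3] * 5
ev = expected_value(bag)
print(ev)
```

An exact answer would instead use linearity of expectation over cells together with the hypergeometric probabilities of each neighbour pattern, but the simulation gives a baseline to test such a formula against.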

Is there a better solution for $\mathrm{\int (a^t)^{(a^t)}dt= C+t+\frac1{ln(a)}\sum_{n=0}^\infty \frac{(-1)^n Q(n+1,-nt\,ln(a))}{n^{n+1}}dt}$?

Posted: 31 Jul 2021 07:28 PM PDT

I know there exist functions like this one for simplifying tetration based sums. There may be a way to simplify this type of sum at least using a lesser known and widely accepted functions. Here are some results as proof: graphical visualization of results and calculation of special case and integral version of special case.

It was a nice idea to find the sophomore's dream, but I thought that it would be more interesting if I could find a "generalized exponential sophomore's dream". This post uses the following functions: regularized gamma functions, the exponential integral function, and tetration. There was an annoying discontinuity at n=0 hence the constant term:

$$\mathrm{\int_0^b \, ^2\left(a^t\right)dt=\int_0^b a^{ta^t}dt=b+\sum_{n=1}^\infty\frac{ln^n(a)}{n!}\int_0^b t^n a^{tn}dt=\boxed{\mathrm{b+\frac1{ln(a)}\sum_{n=1}^\infty\frac{(-1)^nP\big(n+1,-n\,b\,ln(a)\big)}{n^{n+1}}}}\implies \int_{-\frac1e}^0 e^{{te}^t}dt=1+\sum_{n=1}^\infty \frac{(-1)^nP(n+1,n)}{n^{n+1}}=0.77215…}$$

$$\mathrm{\implies A(a,t)\mathop=^\text{def}\int \,^2\left(a^t\right)dt=C+t+\frac1{ln(a)}\sum_{n=0}^\infty \frac{(-1)^n Q(n+1,-nt\,ln(a))}{n^{n+1}}=\quad C+t-t\sum_{n=0}^\infty \frac{(t\,ln(a))^n E_{-n}(-nt\,ln(a))}{n!}}$$

This series reminds me of the Marcum Q function for non negative integers: $$\mathrm{Q_m(a,b)=1-e^{-\frac{a^2}2}\sum_{n=0}^\infty\left(\frac{a^2}{2}\right)^n\frac{P\left(m+n,\frac{b^2}2\right)}{n!}}$$

This representation is good, but it is quite tedious to use, as the formula requires summing an infinite number of regularized gamma functions. I would like to find a way to get rid of the summation. I see the gamma function with powers, which reminds me of the summation definition of a hypergeometric function. I would even like to see the use of a generalized hypergeometric function such as the Meijer G or Kampé de Fériet functions seen in the link. Please correct me and give me feedback!

Result using @Arjun Vyavaharkar's function $\mathrm{Wi}(x)$, which perhaps stands for the Lambert-W integral function:

I have, hopefully, found a summation form: $$\mathrm{Wi(x)=\sum_{n\ge1}n^{n-1}P(n+1,-ln(x)),\quad|ln(x)|<\frac1e}$$ Here is an interactive graph. $\mathrm{P(a,b)}$ is the regularized lower incomplete gamma function, and the expansion used is that of $\mathrm{W(x)}$.

Delta function and $\sum_{t}\exp\{ i k t\} $

Posted: 31 Jul 2021 07:45 PM PDT

I am interested in evaluating the following, which appears in an integrand, $$ \sum_{t=0}^{\infty}e^{ikt} $$ where $k$ is real. Using the following relation $$ \sum_{t=-\infty}^{\infty}e^{ikt} = 2\pi\delta(k), $$ the real part of it is

\begin{eqnarray} \Re\left[\sum_{t=0}^{\infty}e^{ikt}\right]&=&\frac{1}{2}+\frac{1}{2}\Re\left[\sum_{t=-\infty}^{\infty}e^{ikt}\right]\\&=&\frac{1}{2}+\pi\delta(k). \end{eqnarray}

Note the additional $\frac{1}{2}$ term! This $\frac{1}{2}$ does not agree with the following derivation using the $\epsilon$ trick!

\begin{eqnarray} \lim_{\epsilon\rightarrow0_{+}}\lim_{T\rightarrow\infty}\sum_{t=0}^{T}e^{ikt}e^{-\epsilon t}&=&\lim_{\epsilon\rightarrow0_{+}}\lim_{T\rightarrow\infty}\frac{e^{\epsilon}-e^{ik\left(T+1\right)}e^{-\epsilon T}}{e^{\epsilon}-e^{ik}}\\ &=&\lim_{\epsilon\rightarrow0_{+}}\frac{1}{e^{\epsilon}-e^{ik}} \end{eqnarray}

which becomes, when $k \rightarrow 0$,

\begin{eqnarray} \lim_{k\rightarrow0}\lim_{\epsilon\rightarrow0_{+}}\frac{1}{e^{\epsilon}-e^{ik}}&=&\lim_{k\rightarrow0}\lim_{\epsilon\rightarrow0_{+}}\frac{1}{\epsilon-ik}\\ &=&\lim_{k\rightarrow0}\lim_{\epsilon\rightarrow0_{+}}\frac{i}{k+i\epsilon}\\&=&\lim_{k\rightarrow0}\left[\pi\delta\left(k\right)+i\mathcal{P}\frac{1}{k}\right] \end{eqnarray}

where Sokhotsky's formula is used.

Now when $k\nrightarrow0$, $$ \lim_{\epsilon\rightarrow0_{+}}\frac{1}{e^{\epsilon}-e^{ik}}=\frac{1}{1-e^{ik}} $$

Therefore, for all $k$, $$ \sum_{t=0}^{\infty}e^{ikt}=\pi\delta\left(k\right)+\mathcal{P}\frac{1}{1-e^{ik}} $$

Note that the real part of this does not have the additional $\frac{1}{2}$ term; instead it has $\Re\,\mathcal{P}\frac{1}{1-e^{ik}}$.

What am I doing wrong?


Building on Svyatoslav's answer, which in turn is supported by vitamin d, the conclusion is the following: $$ \sum_{t=0}^{\infty}e^{ikt} = \frac{1}{2} + \pi \delta(k) + \frac{i}{2} \mathcal{P} \cot (k/2) $$
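The two derivations can be reconciled numerically for $k\neq0$ via the identity $\frac{1}{1-e^{ik}}=\frac12+\frac{i}{2}\cot(k/2)$, which shows the $\frac12$ was hiding inside $\mathcal{P}\frac{1}{1-e^{ik}}$ all along. A quick stdlib-only check (function names are mine, not from the post):

```python
import cmath
import math

def closed_form(k):
    """1/2 + (i/2)*cot(k/2): the k != 0 part of the regularized sum
    (the pi*delta(k) term only contributes at k = 0)."""
    return 0.5 + 0.5j * math.cos(k / 2) / math.sin(k / 2)

def abel_sum(k, eps=1e-3, T=200_000):
    """Abel-regularized partial sum: sum_{t=0}^{T} e^{ikt} e^{-eps t},
    evaluated as a finite geometric series."""
    q = cmath.exp(1j * k - eps)
    return (1 - q ** (T + 1)) / (1 - q)

for k in (1.0, 2.0, -0.7):
    # geometric closed form 1/(1 - e^{ik}) equals 1/2 + (i/2)cot(k/2)
    assert abs(1 / (1 - cmath.exp(1j * k)) - closed_form(k)) < 1e-12
    # the epsilon-regularized sum approaches the same value
    assert abs(abel_sum(k) - closed_form(k)) < 1e-2
```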

Solving multivariate quadratic equations over the integers

Posted: 31 Jul 2021 07:00 PM PDT

I am looking for a method (if one exists) to solve the following sum-of-squares equation over the integers:

$$ x_1^2 + x_2^2+x_3^2 + \cdots + x_n^2 = m,$$ with $m \in \mathbb{N}.$

Does anyone have ideas about books or articles dealing with this kind of problem?

Thanks in advance!
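For the literature, the keywords to search for are Lagrange's four-square theorem (every natural number is a sum of four squares), Legendre's three-square theorem, and Jacobi's formulas for the number of representations. For small $m$ and $n$, a brute-force enumeration is also easy (a hypothetical sketch, not a known algorithm's name; returns one representative per solution up to order and sign):

```python
from math import isqrt

def sum_of_squares(m, n, lo=0):
    """All non-decreasing n-tuples (x1, ..., xn) of non-negative
    integers with x1^2 + ... + xn^2 = m, found by backtracking:
    pick x1 >= lo, then recurse on the remainder with n - 1 squares."""
    if n == 0:
        return [()] if m == 0 else []
    sols = []
    for x in range(lo, isqrt(m) + 1):
        for rest in sum_of_squares(m - x * x, n - 1, x):
            sols.append((x,) + rest)
    return sols

print(sum_of_squares(25, 2))  # → [(0, 5), (3, 4)]
print(sum_of_squares(6, 3))   # → [(1, 1, 2)]
```

Full integer solutions are then obtained by permuting each tuple and flipping signs of its nonzero entries.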

Isometry and its inverse

Posted: 31 Jul 2021 07:03 PM PDT

I got this affine map:

$$ f: \mathbb{R}^3 \rightarrow \mathbb{R}^3: \begin{pmatrix}x\\ y\\ z\\\end{pmatrix} \mapsto A \cdot \begin{pmatrix}x\\ y\\ z\\\end{pmatrix} + \begin{pmatrix}0\\ -1\\ 1\\\end{pmatrix} $$

with $$ A = \begin{bmatrix} 1 & a_{12} & a_{22}\\ 0 & 1 & a_{21}\\ 0 & 0 & 1 \end{bmatrix}$$

Also given was this information about the inverse (which is indeed again an affine map, since $A$ is invertible): $$ g=f^{-1}: \mathbb{R}^3 \rightarrow \mathbb{R}^3: \begin{pmatrix}x\\ y\\ z\\\end{pmatrix} \mapsto B \cdot \begin{pmatrix}x\\ y\\ z\\\end{pmatrix} + \bar b $$

I have to find $\bar b$. Does anyone have an idea how to find it? I tried plugging in some values, but I can't seem to get there.
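The key observation is that $g(f(x)) = B(Ax + t) + \bar b = x$ for all $x$ forces $B = A^{-1}$ and $\bar b = -A^{-1}t$. Since $A$ is unit upper triangular, its inverse has a closed form. A sketch (writing $p, q, r$ for the three unknown entries of $A$, in the question's notation $a_{12}, a_{22}, a_{21}$; function names are mine):

```python
def inverse_affine(A, t):
    """For f(x) = A x + t with A = [[1, p, q], [0, 1, r], [0, 0, 1]],
    return (B, b) such that g(x) = B x + b is the inverse map:
    B = A^{-1} and b = -A^{-1} t."""
    p, q, r = A[0][1], A[0][2], A[1][2]
    # closed-form inverse of a 3x3 unit upper-triangular matrix
    B = [[1, -p, p * r - q],
         [0, 1, -r],
         [0, 0, 1]]
    b = [-sum(B[i][j] * t[j] for j in range(3)) for i in range(3)]
    return B, b

def apply_affine(M, c, v):
    """Evaluate the affine map x -> M x + c at v."""
    return [sum(M[i][j] * v[j] for j in range(3)) + c[i] for i in range(3)]
```

With the post's translation $t = (0, -1, 1)^T$ this gives $\bar b = (q - p - pr,\; 1 + r,\; -1)^T$, i.e. $\bar b$ in terms of the entries of $A$ alone.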
