Thursday, May 26, 2022

Recent Questions - Mathematics Stack Exchange



Proving a supremum/maximum identity

Posted: 26 May 2022 02:16 PM PDT

Let $\mathbf{S}$ and $\mathbf{U}$ be respectively the stable and unstable subspaces defined by a hyperbolic isomorphism $T \in \mathrm{Gl}(\mathbb{R}^n)$. Let $\mathcal{B}_{\mathbf{S}}$ be the basis of $\mathbf{S}$ such that the matrix of $T|_{\mathbf{S}} = T_{\mathbf{S}}$ under this basis is $$ M(\epsilon) = \left(\begin{array}{cc} \begin{array}{ccc} A_1(\epsilon) & & \\ &\ddots& \\ & & A_s(\epsilon) \end{array} & \\ & \begin{array}{ccc} B_1(\epsilon) & & \\ &\ddots& \\ & & B_r(\epsilon) \end{array} \end{array}\right), $$ where $$ \begin{array}{ccc} A_j(\epsilon) = \left(\begin{array}{cccccc} \lambda_j & \epsilon & 0 &\cdots& 0 & 0 \\ 0 & \lambda_j & \epsilon &\cdots& 0 & 0 \\ 0 & 0 & \lambda_j &\cdots& 0 & 0 \\ \vdots&\vdots&\vdots&\ddots&\vdots&\vdots\\ 0 & 0 & 0 &\cdots& \lambda_j & \epsilon \\ 0 & 0 & 0 &\cdots& 0 & \lambda_j \end{array}\right) &\text{and}& B_k(\epsilon) = \left(\begin{array}{cccccc} D_k & \epsilon I_2 & 0 &\cdots& 0 & 0 \\ 0 & D_k & \epsilon I_2 &\cdots& 0 & 0 \\ 0 & 0 & D_k &\cdots& 0 & 0 \\ \vdots&\vdots&\vdots&\ddots&\vdots&\vdots\\ 0 & 0 & 0 &\cdots& D_k & \epsilon I_2 \\ 0 & 0 & 0 &\cdots& 0 & D_k \end{array}\right) \end{array} $$ with $$ \begin{array}{ccc} D_k = \left(\begin{array}{cc} \alpha_k & \beta_k \\ -\beta_k & \alpha_k \end{array}\right) & \epsilon I_2 = \left(\begin{array}{cc} \epsilon & 0 \\ 0 & \epsilon \end{array}\right) & 0 = \left(\begin{array}{cc} 0 & 0 \\ 0 & 0 \end{array}\right). \end{array} $$ Define an inner product $\langle\cdot,\cdot\rangle_2$ such that $\mathcal{B}_{\mathbf{S}}$ is orthonormal, and let $\|\cdot\|_2$ be the norm induced by this inner product. The operator norm of some $R \in \mathcal{L}(\mathbf{S})$ is then given by $$ \|R\|_{\mathrm{O}2} = \sup_{\|\xi\|_2 \leq 1} \|R\xi\|_2. $$ Repeating the same process for $\mathbf{U}$ yields a basis $\mathcal{B}_{\mathbf{U}}$ and a norm $\|\cdot\|_3$.
Since $\mathbb{R}^n = \mathbf{S} \oplus \mathbf{U}$, $\mathcal{B} = \mathcal{B}_{\mathbf{S}} \cup \mathcal{B}_{\mathbf{U}}$ is a basis for $\mathbb{R}^n$, so we can define a norm $\|\cdot\|_1$ in $\mathbb{R}^n$ given by $\|\xi\|_1 = \max\{\|\xi_{\mathbf{S}}\|_2,\|\xi_{\mathbf{U}}\|_3\}$ where $\xi_{\mathbf{S}}$ and $\xi_{\mathbf{U}}$ are respectively the stable and unstable components of the vector $\xi$.

The operator norm of some $P \in \mathcal{L}(\mathbb{R}^n)$ is given by $$ \|P\|_{\mathrm{O}1} = \sup_{\|\xi\|_1 \leq 1} \|P\xi\|_1. $$ Is it possible to show that $$ \sup_{\max\{\|\xi_\mathbf{S}\|_{2},\|\xi_\mathbf{U}\|_{3}\} \leq 1}\max\{\|T_\mathbf{S}\xi_\mathbf{S}\|_{2},\|T_\mathbf{U}\xi_\mathbf{U}\|_{3}\} = \max\left\{\sup_{\|\xi_\mathbf{S}\|_{2} \leq 1}\|T_\mathbf{S}\xi_\mathbf{S}\|_{2},\sup_{\|\xi_\mathbf{U}\|_{3} \leq 1}\|T_\mathbf{U}\xi_\mathbf{U}\|_{3}\right\}? $$

I tried assuming $\max\{\|\xi_\mathbf{S}\|_{2},\|\xi_\mathbf{U}\|_{3}\} = \|\xi_\mathbf{S}\|_{2}$, but it led nowhere.

If someone can clear the path for me, it would be greatly appreciated.
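A remark that may help (my own, not from the question): the identity holds for arbitrary bounded-above functions on nonempty sets, not just these operator norms, because both sides are the least upper bound of the same set of values:

```latex
% For nonempty sets U, V and g : U -> R, h : V -> R bounded above,
\sup_{(u,v) \in U \times V} \max\{g(u), h(v)\}
  \;=\; \max\Bigl\{\, \sup_{u \in U} g(u),\; \sup_{v \in V} h(v) \,\Bigr\}.
% "<=": each max{g(u), h(v)} is bounded by the right side.
% ">=": fixing any v and varying u shows the left side dominates sup g,
%       and symmetrically sup h.
```

Applying this with $U$ and $V$ the unit balls of $\|\cdot\|_2$ and $\|\cdot\|_3$ works because the constraint $\max\{\|\xi_\mathbf{S}\|_2, \|\xi_\mathbf{U}\|_3\} \leq 1$ decouples into the two constraints separately.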

Number of elements with order 2 in a finite abelian group

Posted: 26 May 2022 02:13 PM PDT

Suppose $G$ is a finite abelian group, and let $N$ denote the number of order-$2$ elements in $G$.

I: The subset of all elements of order $2$, together with $\{e\}$, is a subgroup of $G$, because for all $x \ne y$ we have $(xy)^2=x^2y^2=e$, and every element is its own inverse. Therefore $N+1$ divides $|G|$ by Lagrange's theorem.

II: If $N=1$ (i.e. there is only one element of order $2$ in $G$), namely $x$, everything is fine. However, if we have $x$ and $y$ as elements of order $2$ in $G$, then $xy$ has order $2$ and $N=3$. If there exists another element of order $2$ in $G$, namely $z$, then $xz,\ yz,\ xyz$ have order $2$ and $N=7$. If there exists another element of order $2$ in $G$, namely $w$, then $xw,\ yw,\ zw,\ xyw,\ xzw,\ yzw, \ xyzw$ have order $2$ and $N=15$. By induction, $N=\binom{n}{1}+\cdots+\binom{n}{n} = 2^n-1$ for some $n$.

With I and II, $N= 2^n-1$ for some $n$ satisfying $2^n \mid |G|$.

Obviously, if $|G|$ is odd then $N=0$; and if $|G|=36$ then $N=1$ or $N=3$.

Is what I wrote correct?

Can we be more specific about the number of elements of order $2$ in $G$?
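Not part of the question, but the $N = 2^n - 1$ pattern is easy to check by brute force on abelian groups written as products of cyclic groups (the helper below is mine):

```python
from itertools import product

def order2_count(moduli):
    """Count elements of order exactly 2 in Z_m1 x ... x Z_mk."""
    count = 0
    for g in product(*(range(m) for m in moduli)):
        # g has order 2 iff g != 0 and 2g = 0 componentwise.
        if any(g) and all((2 * x) % m == 0 for x, m in zip(g, moduli)):
            count += 1
    return count

# N = 2^n - 1, where n is the number of even cyclic factors.
print(order2_count([36]))       # Z_36        -> 1
print(order2_count([6, 6]))     # Z_6 x Z_6   -> 3   (|G| = 36, as in the post)
print(order2_count([2, 2, 2]))  # (Z_2)^3     -> 7
```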

Let $(f_n)$ be a pointwise bounded sequence of l.s.c. convex functions. Then $(f_n)$ has a convergent subsequence

Posted: 26 May 2022 02:13 PM PDT

Disclaimer: This thread is meant to record a question together with its answer. See: SE blog: Answer own Question and MSE meta: Answer own Question. Anyway, it is written as a problem. Have fun! :)


Let $C$ be an open convex subset of a Banach space $X$ and $\mathcal F$ a collection of real-valued continuous functions on $C$. We say that $\mathcal{F}$ is pointwise bounded if, for each $x \in C$, the set $\{f(x) \mid f \in \mathcal{F}\}$ is bounded.

Theorem: Assume $X$ is separable. Let $(f_n)$ be a pointwise bounded sequence of l.s.c. convex functions on $C$. Then there exists a subsequence $(f_{\varphi(n)})$ of $(f_n)$ that converges pointwise and uniformly on compact subsets to a continuous convex function on $C$.

Prove inequality $|f(x) - f(y)| \leq |x-y|$

Posted: 26 May 2022 02:13 PM PDT

How can I prove that:

$$ \left|\frac{1}{\sqrt{a^2 + 1}} - \frac{1}{\sqrt{b^2 + 1}}\right| \leq |a - b| \hspace{1cm} \forall a,b > 0$$

Any help is appreciated
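One standard route (my sketch, not necessarily the intended one): the function $f(t) = 1/\sqrt{t^2+1}$ has a bounded derivative, so the mean value theorem gives the Lipschitz bound directly.

```latex
% f(t) = (t^2 + 1)^{-1/2}, so
|f'(t)| = \frac{|t|}{(t^2+1)^{3/2}} \le \frac{2}{3\sqrt{3}} < 1
  \quad \text{for all } t \in \mathbb{R},
% (the maximum occurs at t^2 = 1/2).  By the MVT, for some \xi between a and b,
|f(a) - f(b)| = |f'(\xi)|\,|a-b| \le |a-b|.
```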

Definition of algebra of functions?

Posted: 26 May 2022 02:09 PM PDT

In the book "Information Geometry" by Nihat Ay et al., Chapter 2.1 on page 25 begins as follows:

We consider a non-empty and finite set $I$ [...] The real algebra of functions $I \rightarrow \mathbb R$ is denoted by $\mathcal F(I)$, and its unity $\mathbb 1_{I}$ or simply $1$ is given by $\mathbb{1}(i) = 1$, $i\in I$.

What exactly is an algebra of functions? I've searched the Internet, but found nothing yet. Any reference would be appreciated.
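For what it's worth, when $I$ is finite, $\mathcal F(I)$ is just $\mathbb{R}^I$ with pointwise operations, which is what makes it a real algebra. A small sketch (the names are mine, not the book's):

```python
# F(I) for a finite set I: functions I -> R as dicts, with pointwise
# addition, multiplication, and scalar multiplication.
I = ["a", "b", "c"]

def add(f, g):   return {i: f[i] + g[i] for i in I}
def mul(f, g):   return {i: f[i] * g[i] for i in I}
def scale(c, f): return {i: c * f[i] for i in I}

one = {i: 1.0 for i in I}            # the unity 1_I from the quote
f = {"a": 2.0, "b": -1.0, "c": 0.5}

print(mul(one, f) == f)              # True: 1_I is the multiplicative identity
print(add(f, scale(-1.0, f)))        # the zero function
```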

The N-barrel problem, from some national math olympiad

Posted: 26 May 2022 02:03 PM PDT

Someone told me a very interesting problem from a math olympiad. It goes like this: suppose you have a circular platform with $N$ barrels arranged in a "nice" order, each barrel being a vertex of a regular $N$-gon. Inside each barrel is a light bulb, which can be on or off, and you can toggle it with a button, but you know neither the current nor the original state. Each round the platform rotates, so that you don't know which barrel is which, but the relative positions are preserved (if $N=3$: B1,B2,B3 => B3,B1,B2 => B2,B3,B1, ...). Your goal is to turn on all the lights in a finite number of steps; you have to give a strategy and say for which $N$ this is possible. When you have managed to light all the bulbs, you will be warned. I have managed to make it work for all powers of $2$, but beyond that I have no idea.

Here is the demonstration for $N=2$, in case I didn't make myself clear. If all the bulbs start on, you've already won. If they are all off, just press the button on both barrels; they light up and you win. If one bulb is on and the other off, the first thing to do is press both buttons to check which case you are in: since you have not won, you know one bulb is on and one is off. Now press any single button. This converts the state into all-on or all-off; in the former case you have won, and in the latter you press both buttons and win in the next round.

If anyone can tell me which olympiad this comes from and how to solve it (in a nice way) I would appreciate it. I'm convinced there has to be a very elegant solution using group theory. This deserves a video from 3Blue1Brown; we are waiting for you. English is not my first language, so you may have trouble understanding this. Sorry!
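The $N=2$ argument above can be checked exhaustively. Here is a small simulation I wrote (not from the original statement), where the unknown rotation is modeled by letting an adversary decide which bulb a single press hits:

```python
from itertools import product

def press_all(s):
    return tuple(1 - b for b in s)

def press_one(s, i):
    t = list(s)
    t[i] ^= 1
    return tuple(t)

def n2_strategy_wins(start):
    """The N = 2 strategy described in the post, checked against
    every adversarial choice of rotation."""
    if all(start):                  # warned immediately: already won
        return True
    s = press_all(start)            # move 1: press both buttons
    if all(s):
        return True
    # Still one on, one off: press a single button.  The rotation means
    # the adversary picks which bulb it toggles, so require a win either way.
    for i in (0, 1):
        t = press_one(s, i)
        if not (all(t) or all(press_all(t))):   # last move: press both again
            return False
    return True

print(all(n2_strategy_wins(s) for s in product((0, 1), repeat=2)))  # True
```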

Prove the equivalence of an upper bounded non empty set

Posted: 26 May 2022 02:15 PM PDT

Let $M \subseteq\mathbb{R}$ be non-empty and bounded above. Show the following equivalence for $s \in \mathbb{R}$:

$ s = \sup M \Longleftrightarrow$ $s$ is an upper bound of $M$ and there exists a sequence $(a_n)_{n \in \mathbb{N}}$ with $a_n \in M$ for all $n \in \mathbb{N}$ such that $\lim_{n \rightarrow \infty} a_n = s$.

It's clear to me that I have to show the two directions:

$\rightarrow$ my idea: It is clear that if $s=\sup M$, then $s$ is an upper bound of $M$. Take $\varepsilon>0$. Since $s$ is the least upper bound of $M$, $s-\varepsilon$ is not an upper bound of $M$, which means that there is an $x \in M$ such that $x > s-\varepsilon$. But I have no clue how to continue from here, and I don't even know how to do the $\leftarrow$ direction. I appreciate any kind of help.
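The idea above can be finished by taking $\varepsilon = 1/n$ (my sketch, stated with hedging since it is one of several standard routes):

```latex
% Forward direction: for each n, s - 1/n is not an upper bound, so
% choose a_n \in M with
s - \tfrac{1}{n} < a_n \le s
\quad\Longrightarrow\quad \lim_{n \to \infty} a_n = s
% by the squeeze theorem.
% Reverse direction: if some upper bound t satisfied t < s, then
% a_n \le t for all n would force s = \lim a_n \le t, a contradiction,
% so s is the least upper bound.
```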

Direct sum of tangent spaces of smooth manifolds

Posted: 26 May 2022 02:09 PM PDT

Let $M$ and $N$ be smooth manifolds of dimension $m$ and $n$ respectively, and for $p \in M$ denote \begin{equation} \mathcal{T}_p M = \lbrace \gamma \in C^{\infty}(I,M) | 0 \in I \land \gamma(0) = p \rbrace. \end{equation} Also let $\sim$ be an equivalence relation on $\mathcal{T}_p M$ defined by \begin{equation} c_1 \sim c_2 : \iff \exists (U_{\varphi},\varphi) \in \mathcal{A}_M: p \in U_{\varphi} \land \frac{d}{dt}(\varphi \circ c_1)(0) = \frac{d}{dt}(\varphi \circ c_2)(0) \end{equation} where $\mathcal{A}_M$ is an atlas obtained from the smooth structure of $M$; then the tangent space $T_p M$ of $M$ at $p$ is defined by \begin{equation} T_p M := \mathcal{T}_p M / \sim. \end{equation} For $(U_{\varphi}, \varphi) \in \mathcal{A}_M$ define a map $t^M_p \varphi : T_p M \to \mathbb{R}^m$ for $[c]_{\sim} \in T_p M$ by \begin{equation} t^M_p\varphi([c]_{\sim}) := \frac{d}{dt}(\varphi \circ c)(0). \end{equation} Now the product manifold $M \times N$ can be equipped with the product atlas, which is contained in a unique maximal atlas; hence $M \times N$ is a smooth manifold. For $(p,q) \in M \times N$ the tangent space is then given by $T_{(p,q)}M \times N = \mathcal{T}_{(p,q)}M \times N / \sim$. For $(U_{\varphi_1}, \varphi_1) \in \mathcal{A}_M$ and $(U_{\varphi_2}, \varphi_2) \in \mathcal{A}_N$ define a map $t^{M \times N}_{(p,q)}(\varphi_1 \times \varphi_2) : T_{(p,q)}M \times N \to \mathbb{R}^{m+n} $ by \begin{equation*} t^{M \times N}_{(p,q)}(\varphi_1 \times \varphi_2)(([\gamma_1]_{\sim}, [\gamma_2]_{\sim})) = \left( t^{M}_{p}\varphi_1([\gamma_1]_{\sim}), t^{N}_{q}\varphi_2([\gamma_2]_{\sim})\right) \end{equation*} for two curves $\gamma_1 \in \mathcal{T}_p M$ and $\gamma_2 \in \mathcal{T}_q N$ with domains $I$ and $J$ respectively.
Note that \begin{equation*} (t^{M \times N}_{(p,q)}(\varphi_1 \times \varphi_2))^{-1}(\gamma_1(t), \gamma_2(s)) = ((t^{M}_p \varphi_1)^{-1}(\gamma_1(t)), (t^{N}_q \varphi_2)^{-1}(\gamma_2(s)) ) \end{equation*} for some $ t \in I$ and $s \in J$. Also the tangent spaces $T_p M$ and $T_q N$ are identified with vector spaces by \begin{align} [c_1]_{\sim} + [c_2]_{\sim} &:= (t^{M}_p\varphi)^{-1}(t^{M}_p\varphi([c_1]_{\sim}) + t^{M}_p\varphi([c_2]_{\sim})), \\ \lambda[c_1]_{\sim} &:= (t^{M}_p\varphi)^{-1}(\lambda t^{M}_p\varphi([c_1]_{\sim})) \end{align} for $[c_1]_{\sim}, [c_2]_{\sim} \in T_p M$ and $\lambda \in \mathbb{R}$. The same can be done for $T_q N$ and $T_{(p,q)}M \times N$.

Here is the actual problem that I try to solve:

Now I want to find a natural isomorphism $T_p M \oplus T_q N \cong T_p M \times T_q N \cong T_{(p,q)}M \times N$. I know this can be done by defining $\alpha(v) = (d(\pi_1)_{(p,q)}(v), d(\pi_2)_{(p,q)}(v)) $. But there is also another approach which I find more interesting. So I define $\iota_{(p,q)} : T_p M \times T_q N \to T_{(p,q)}M \times N$ by \begin{equation*} \iota_{(p,q)} : ([\gamma_1]_{\sim}, [\gamma_2]_{\sim}) \mapsto [t \mapsto (\gamma_1(t), \gamma_2(t))]_{\sim} =: [(\gamma_1, \gamma_2)]_{\sim}. \end{equation*} It is easy to show that $\iota_{(p,q)}$ is injective and surjective hence bijective. But for the linearity I struggle. Let $(U_{\varphi_1}, \varphi_1) \in \mathcal{A}_M$ be a chart with $p \in U_{\varphi_1}$ and $(U_{\varphi_2}, \varphi_2) \in \mathcal{A}_N$ a chart with $q \in U_{\varphi_2}$. I got for $([\gamma_1]_{\sim},[\gamma_2]_{\sim}),([\tilde{\gamma_1}]_{\sim},[\tilde{\gamma_2}]_{\sim}) \in T_p M \times T_q N$ the following \begin{align*} &\iota_{(p,q)}(([\gamma_1]_{\sim},[\gamma_2]_{\sim}) + ([\tilde{\gamma_1}]_{\sim},[\tilde{\gamma_2}]_{\sim})) \\ &= (\iota_{(p,q)} \circ (t^{M \times N}_{(p,q)}(\varphi_1 \times \varphi_2))^{-1})(t^{M \times N}_{(p,q)} (\varphi_1 \times \varphi_2)(([\gamma_1]_{\sim},[\gamma_2]_{\sim})) + t^{M \times N}_{(p,q)}(\varphi_1 \times \varphi_2)(([\tilde{\gamma_1}]_{\sim},[\tilde{\gamma_2}]_{\sim}))) ) \\ &= (\iota_{(p,q)} \circ (t^{M \times N}_{(p,q)}(\varphi_1 \times \varphi_2))^{-1}) \left( \left( \frac{d}{dt}(\varphi_1 \circ \gamma_1)(0), \frac{d}{dt}(\varphi_2 \circ \gamma_2)(0)\right) + \left( \frac{d}{dt}(\varphi_1 \circ \tilde{\gamma_1})(0), \frac{d}{dt}(\varphi_2 \circ \tilde{\gamma_2})(0)\right) \right) \\ &= (\iota_{(p,q)} \circ (t^{M \times N}_{(p,q)}(\varphi_1 \times \varphi_2))^{-1}) \left( \left(\frac{d}{dt}(\varphi_1 \circ \gamma_1)(0) + \frac{d}{dt}(\varphi_1 \circ \tilde{\gamma_1})(0), \frac{d}{dt}(\varphi_2 \circ \gamma_2)(0) + \frac{d}{dt}(\varphi_2 \circ \tilde{\gamma_2})(0) \right) \right) 
\\ &= (\iota_{(p,q)} \circ (t^{M \times N}_{(p,q)}(\varphi_1 \times \varphi_2))^{-1}) \left( \left( \frac{d}{dt}\left( \varphi_1 \circ \gamma_1 + \varphi_1 \circ \tilde{\gamma_1} \right)(0), \frac{d}{dt}(\varphi_2 \circ \gamma_2 + \varphi_2 \circ \tilde{\gamma_2} )(0) \right) \right) \\ &= (\iota_{(p,q)} \circ (t^{M \times N}_{(p,q)}(\varphi_1 \times \varphi_2))^{-1}) \left( \left( \frac{d}{dt}\left( \varphi_1 \circ (\gamma_1 + \tilde{\gamma_1}) \right)(0), \frac{d}{dt}(\varphi_2 \circ (\gamma_2 + \tilde{\gamma_2}) )(0) \right) \right) \\ &= \iota_{(p,q)} \left((t^{M}_{p}\varphi_1)^{-1}(t^M_{p}\varphi_1([\gamma_1 + \tilde{\gamma_1}]_{\sim})), (t^{N}_q \varphi_2)^{-1}(t^{N}_q \varphi_2([\gamma_2 + \tilde{\gamma_2}]_{\sim}) ) \right)\\ &= \iota_{(p,q)} \left([\gamma_1 + \tilde{\gamma_1}]_{\sim}, [\gamma_2 + \tilde{\gamma_2}]_{\sim} \right) \\ &= [(\gamma_1 + \tilde{\gamma_1}, \gamma_2 + \tilde{\gamma_2})]_{\sim} \\ &= [(\gamma_1, \gamma_2) + (\tilde{\gamma_1}, \tilde{\gamma_2})]_{\sim} \\ &= [(\gamma_1, \gamma_2)]_{\sim} + [(\tilde{\gamma_1}, \tilde{\gamma_2})]_{\sim} \\ &= \iota_{(p,q)}([\gamma_1]_{\sim},[\gamma_2]_{\sim}) + \iota_{(p,q)}([\tilde{\gamma_1}]_{\sim},[\tilde{\gamma_2}]_{\sim}) \end{align*} where at the end I used \begin{equation*} [(\gamma_1, \gamma_2) + (\tilde{\gamma_1}, \tilde{\gamma_2})]_{\sim} = [(\gamma_1, \gamma_2)]_{\sim} + [(\tilde{\gamma_1}, \tilde{\gamma_2})]_{\sim} \end{equation*} I am not sure if this is true, so I tried to justify it by the following argument:
\begin{align*} &[(\gamma_1, \gamma_2)]_{\sim} + [(\tilde{\gamma_1}, \tilde{\gamma_2})]_{\sim} = (t^{M \times N}_{(p,q)}(\varphi_1 \times \varphi_2))^{-1}(t^{M \times N}_{(p,q)}(\varphi_1 \times \varphi_2)([(\gamma_1, \gamma_2)]_{\sim}) ) + (t^{M \times N}_{(p,q)}(\varphi_1 \times \varphi_2))^{-1}(t^{M \times N}_{(p,q)}(\varphi_1 \times \varphi_2)([(\tilde{\gamma_1}, \tilde{\gamma_2})]_{\sim}) ) \\ &= (t^{M \times N}_{(p,q)}(\varphi_1 \times \varphi_2))^{-1}\left(t^{M \times N}_{(p,q)}(\varphi_1 \times \varphi_2)([(\gamma_1, \gamma_2)]_{\sim}), t^{M \times N}_{(p,q)}(\varphi_1 \times \varphi_2)([(\tilde{\gamma_1}, \tilde{\gamma_2})]_{\sim}) \right) \end{align*} since $t^{M \times N}_{(p,q)}(\varphi_1 \times \varphi_2)$ is linear (this comes from the identification of the tangent space with a vector space ) because $t^{M}_{p}\varphi_1$ and $t^{N}_{q}\varphi_2$ are linear. Now \begin{equation*} t^{M \times N}_{(p,q)}(\varphi_1 \times \varphi_2)([(\gamma_1, \gamma_2)]_{\sim}) + t^{M \times N}_{(p,q)}(\varphi_1 \times \varphi_2)([(\tilde{\gamma_1}, \tilde{\gamma_2})]_{\sim}) \end{equation*} is a vector in $\mathbb{R}^{n + m}$ which gets represented by $(\gamma_1, \gamma_2) + (\tilde{\gamma_1}, \tilde{\gamma_2}) = (\gamma_1 + \tilde{\gamma_1}, \gamma_2 + \tilde{\gamma_2})$ hence we can write \begin{equation*} t^{M \times N}_{(p,q)}(\varphi_1 \times \varphi_2)([(\gamma_1, \gamma_2)]_{\sim}) + t^{M \times N}_{(p,q)}(\varphi_1 \times \varphi_2)([(\tilde{\gamma_1}, \tilde{\gamma_2})]_{\sim}) = t^{M \times N}_{(p,q)}(\varphi_1 \times \varphi_2)([(\gamma_1 + \tilde{\gamma_1}, \gamma_2 + \tilde{\gamma_2}) ]_{\sim}) \end{equation*} thus \begin{align*} [(\gamma_1, \gamma_2)]_{\sim} + [(\tilde{\gamma_1}, \tilde{\gamma_2})]_{\sim} &= (t^{M \times N}_{(p,q)}(\varphi_1 \times \varphi_2))^{-1}(t^{M \times N}_{(p,q)}(\varphi_1 \times \varphi_2)([(\gamma_1 + \tilde{\gamma_1}, \gamma_2 + \tilde{\gamma_2}) ]_{\sim})) \\ &= [(\gamma_1 + \tilde{\gamma_1}, \gamma_2 + \tilde{\gamma_2} )]_{\sim}. 
\end{align*} I am not sure if this is the correct way of justifying the last step. I also tried justifying it by taking $c_1 \in [(\gamma_1, \gamma_2)]_{\sim}$ and $c_2 \in [(\tilde{\gamma_1}, \tilde{\gamma_2})]_{\sim}$ and then showing it by definition \begin{align*} \frac{d}{dt}((\varphi_1 \times \varphi_2) \circ (c_1 + c_2))(0) &= \frac{d}{dt}((\varphi_1 \times \varphi_2) \circ c_1 ) (0) + \frac{d}{dt}((\varphi_1 \times \varphi_2) \circ c_2 ) (0) \\ &= \frac{d}{dt}((\varphi_1 \times \varphi_2) \circ (\gamma_1, \gamma_2) )(0) + \frac{d}{dt}((\varphi_1 \times \varphi_2) \circ (\tilde{\gamma_1}, \tilde{\gamma_2}) )(0)\\ &= \frac{d}{dt}((\varphi_1 \times \varphi_2) \circ ((\gamma_1, \gamma_2) + (\tilde{\gamma_1}, \tilde{\gamma_2})))(0) \\ &= \frac{d}{dt}((\varphi_1 \times \varphi_2) \circ (\gamma_1 + \tilde{\gamma_1}, \gamma_2 + \tilde{\gamma_2}) )(0) \end{align*} and if $c \in [(\gamma_1 + \tilde{\gamma_1}, \gamma_2 + \tilde{\gamma_2})]_{\sim}$ then \begin{align*} \frac{d}{dt}((\varphi_1 \times \varphi_2) \circ c)(0) &= \frac{d}{dt}((\varphi_1 \times \varphi_2) \circ (\gamma_1 + \tilde{\gamma_1}, \gamma_2 + \tilde{\gamma_2}))(0) \\ &= \frac{d}{dt}((\varphi_1 \times \varphi_2) \circ ((\gamma_1, \gamma_2) + (\tilde{\gamma_1}, \tilde{\gamma_2})) )(0) \\ &= \frac{d}{dt}((\varphi_1 \times \varphi_2) \circ (\gamma_1, \gamma_2))(0) + \frac{d}{dt}((\varphi_1 \times \varphi_2) \circ (\tilde{\gamma_1}, \tilde{\gamma_2}))(0). \end{align*}

Can we know that a map is a submersion using only its level sets?

Posted: 26 May 2022 02:11 PM PDT

Let $M$ be a manifold of dimension $2n$, $f:M\rightarrow\mathbb{R}^n$ a smooth map.

We know that if $f$ is a submersion, then for any $c\in f(M)$, $f^{-1}(c)$ is an $n$-submanifold of $M$.

But what about the converse: if $f$ is surjective and every level set of $f$ is an $n$-submanifold of $M$, does that imply that $f$ is a submersion?

Why am I getting two different solutions in this integral?

Posted: 26 May 2022 01:55 PM PDT

I was given the following integral:

$$ \int \frac {x^2} {(3+4x-4x^2)^{3/2}}\,dx $$

By completing the square and using trigonometric substitution we get this:

$ \sqrt{-(4x^2-4x+1)+4 }=\sqrt{ -(2x-1)^2+4}$
So,
$2x-1=2\sin\theta$
$x=\sin\theta+\frac12$
$dx=\cos\theta \, d\theta$
and
$\sqrt{ -(2x-1)^2+4}=\sqrt{-(2\sin\theta)^2+4}=2\cos\theta$

$$ \int \frac {(\sin\theta + \frac 12)^2\cos\theta} {8\cos^3\theta}d\theta $$

$$ \int \frac {\sin^2\theta + \sin\theta + \frac 14} {8\cos^2\theta}d\theta $$

$$ \frac 18 \int \frac {\sin^2\theta} {\cos^2\theta}d\theta + \frac 18 \int \frac{\sin\theta} {\cos^2\theta}d\theta + \frac 1{32} \int \frac {1} {\cos^2\theta}d\theta $$

From here there are two options for handling the term $ \frac 18 \int \frac {\sin^2\theta} {\cos^2\theta}\,d\theta $.

The first, probably the simpler method, is substituting $\tan^2\theta$ and subsequently $\sec^2\theta -1$, which looks like so:

$$ \frac 18 \int \sec^2\theta \,d\theta - \frac 18 \int d\theta + \frac 18 \int \tan\theta \sec\theta \,d\theta + \frac 1{32} \int \sec^2\theta \,d\theta $$

$$ \frac 5{32} \int \sec^2\theta \,d\theta - \frac 18 \int d\theta + \frac 18 \int \tan\theta \sec\theta \,d\theta $$

$$ \frac 5{32} \tan\theta - \frac 18 \theta + \frac 18 \sec\theta + C $$

Substituting back in to get our original x values:

$$ \frac {5(2x-1)} {32\sqrt{-(2x-1)^2+4}} - \frac 18 \sin^{-1}\left(\tfrac12(2x-1)\right)+ \frac {2}{8\sqrt{-(2x-1)^2+4}} + C$$

$$ \frac {5(2x-1)} {32\sqrt{-(2x-1)^2+4}} + \frac {8}{32\sqrt{-(2x-1)^2+4}} - \frac 18 \sin^{-1}\left(\tfrac12(2x-1)\right) + C$$

$$ \frac {10x-5} {32\sqrt{-(2x-1)^2+4}} + \frac {8}{32\sqrt{-(2x-1)^2+4}} - \frac 18 \sin^{-1}\left(\tfrac12(2x-1)\right) + C$$

$$ \frac {10x+3} {32\sqrt{-(2x-1)^2+4}} - \frac 18 \sin^{-1}\left(\tfrac12(2x-1)\right) + C$$

The other method I found was substituting $1-\cos^2\theta$ for $\sin^2\theta$, but I get a different solution.

$$ \frac 18 \int \frac {1-\cos^2\theta} {\cos^2\theta}\,d\theta + \frac 18 \int \tan\theta \sec\theta \,d\theta + \frac 1{32} \int \sec^2\theta \,d\theta $$

$$ \frac 18 \int \frac {1} {\cos^2\theta} \,d\theta - \frac 18 \int d\theta + \frac 18 \int \tan\theta \sec\theta \,d\theta + \frac 1{32} \int \sec^2\theta \,d\theta $$

$$ \frac 18 \tan\theta - \frac 18 \theta + \frac 18 \sec\theta + \frac 1{32}\tan\theta + C $$

$$ \frac {2x-1} {8\sqrt{-(2x-1)^2+4}} + \frac {2}{8\sqrt{-(2x-1)^2+4}} + \frac {2x-1} {32\sqrt{-(2x-1)^2+4}} - \frac 18 \sin^{-1}\left(\tfrac12(2x-1)\right) + C$$

$$ \frac {4(2x-1)} {32\sqrt{-(2x-1)^2+4}} + \frac {2}{8\sqrt{-(2x-1)^2+4}} + \frac {2x-1} {32\sqrt{-(2x-1)^2+4}} - \frac 18 \sin^{-1}\left(\tfrac12(2x-1)\right) + C$$

$$ \frac {8x-4} {32\sqrt{-(2x-1)^2+4}} + \frac {2}{8\sqrt{-(2x-1)^2+4}} + \frac {2x-1} {32\sqrt{-(2x-1)^2+4}} - \frac 18 \sin^{-1}\left(\tfrac12(2x-1)\right) + C$$

$$ \frac {10x-3} {32\sqrt{-(2x-1)^2+4}} - \frac 18 \sin^{-1}\left(\tfrac12(2x-1)\right) + C$$

The solutions are nearly identical, but the numerator in the first solution contains a $+3$ while the second contains a $-3$.

Note: I was given that the solution is:

$$ \frac {10x+3} {32\sqrt{-(2x-1)^2+4}} - \frac 18 \sin^{-1}\left(\tfrac12(2x-1)\right) + C$$
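Since the two candidates differ, a numerical derivative check settles which one matches the integrand (my sketch, assuming the integrand is $x^2/(3+4x-4x^2)^{3/2}$ as the work above suggests):

```python
from math import asin, sqrt

def integrand(x):
    return x**2 / (3 + 4*x - 4*x**2)**1.5

def candidate(x, sign):
    """The two final answers, differing only in the +3 / -3 term."""
    r = sqrt(-(2*x - 1)**2 + 4)
    return (10*x + sign*3) / (32*r) - asin((2*x - 1)/2) / 8

def numeric_deriv(F, x, h=1e-6):
    return (F(x + h) - F(x - h)) / (2*h)

x = 0.7
plus  = numeric_deriv(lambda t: candidate(t, +1), x)
minus = numeric_deriv(lambda t: candidate(t, -1), x)
print(abs(plus - integrand(x)) < 1e-6)    # True:  the +3 answer is correct
print(abs(minus - integrand(x)) < 1e-6)   # False: the -3 answer is not
```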

Gauge equivalence of Lie-valued forms on the base space of a principal bundle

Posted: 26 May 2022 02:02 PM PDT

Given a principal $G$-bundle $P\xrightarrow{\pi} M$:

Assuming the bundle is globally trivial, we define two Lie$G$-valued 1-forms $A_1,A_2$ on $M$ to be gauge-equivalent if there is a principal bundle connection $\tilde{A}$ on $P$ and two global sections $s_j$ of $\pi$ such that $s_j^* \tilde{A} = A_j$, for $j=1,2$.

(The above seems to reflect the definitions used in physics, if I'm not mistaken.)

  1. Is every Lie$G$-valued 1-form on $M$ equal to $s^*\tilde{A}$ for some global section $s$ and connection $\tilde{A}$ on $P$?

  2. What would be the definition of gauge-equivalence when the bundle isn't trivial?

Solving a non-homogeneous recurrence sequence

Posted: 26 May 2022 01:52 PM PDT

(The recurrence is given in an image, not transcribed here.)

Please solve this; in my iteration I always end up with no plus term.

$f(x)=x^{3}-3 x+a$. Given that it has $3$ integer roots $\alpha, \beta, \gamma$. Find all possible values of $a$.

Posted: 26 May 2022 02:15 PM PDT

So, I tried Vieta's formulas and, with some algebra, I got to the point $0=\alpha^{2}+\beta^{2}+\gamma^{2}+2(\alpha \gamma+\gamma \beta+\alpha \beta)$, where $\alpha \gamma+\gamma \beta+\alpha \beta = -3$. I don't know how to progress from here.
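Not a full solution, but a brute-force search over integer triples with the Vieta constraints for $x^3-3x+a$ (sum of roots $0$, pairwise-product sum $-3$, $a = -\alpha\beta\gamma$) suggests what the answer should be; the search window is my own choice:

```python
# Since the roots satisfy a^2 + b^2 + c^2 = 6, they lie in a small range.
values_of_a = set()
for alpha in range(-10, 11):
    for beta in range(-10, 11):
        gamma = -(alpha + beta)                       # Vieta: sum of roots = 0
        if alpha*beta + beta*gamma + gamma*alpha == -3:
            values_of_a.add(-(alpha * beta * gamma))  # Vieta: product = -a
print(sorted(values_of_a))   # [-2, 2]
```

For instance $a=2$ gives $x^3-3x+2 = (x-1)^2(x+2)$, with integer roots $1, 1, -2$.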

Problem on studying $\sum_{n\geq 1}\frac{n^{\log{n}}}{\sqrt{n!}}\frac{\tan{n}}{|\tan{n}|+n}$, comparison criterion

Posted: 26 May 2022 02:09 PM PDT

To study the convergence of the series $\sum_{n\geq 1}\frac{n^{\log{n}}}{\sqrt{n!}}\frac{\tan{n}}{|\tan{n}|+n}$ I have thought that: $$\frac{n^{\log{n}}}{\sqrt{n!}}\geq \frac{1}{\sqrt{n!}}\,\, \forall n\geq 1$$ So: $$a_n=\frac{n^{\log{n}}}{\sqrt{n!}}\frac{\tan{n}}{|\tan{n}|+n}\geq \frac{1}{\sqrt{n!}} \frac{\tan{n}}{|\tan{n}|+n}=b_n$$ Moreover $\frac{\tan{n}}{|\tan{n}|+n}\sim \frac{\tan{n}}{n}\implies b_n\sim \frac{1}{\sqrt{n!}}\frac{\tan{n}}{n}\geq \frac{1}{\sqrt{n!}}$.
Now $\sum \frac{1}{\sqrt{n!}}$ diverges but I can't say that also the original series diverges... Can you help me?

$\vec v_0,...,\vec v_{n-1}$ is a basis in a linear space. $A\vec v_i=\vec v_{(i-1)~mod~n}+\vec v_{(i+1)~mod~n}$. Find eigenvalues of $A$.

Posted: 26 May 2022 01:54 PM PDT

Suppose $\vec v_0,...,\vec v_{n-1}$ is a basis of a linear space, and $A$ is a linear operator such that $A\vec v_i=\vec v_{(i-1)~mod~n}+\vec v_{(i+1)~mod~n}$. Find the eigenvalues of $A$.


I assume that in the case $i=0$ this means: $A\vec v_0=\vec v_{n-1} + \vec v_1$.

In the case of $n=3$: $$ \begin{cases} A\vec v_0=\vec v_2 + \vec v_1\\ A\vec v_1=\vec v_0 + \vec v_2\\ A\vec v_2=\vec v_1 + \vec v_0\\ \end{cases} $$

We can notice that each $\vec v_i$ appears in exactly $2$ equations, so we can write:
$A(\vec v_0 + \vec v_1 + \vec v_2) =\vec v_2 + \vec v_1+\vec v_0 + \vec v_2+\vec v_1 + \vec v_0= 2 (\vec v_0 + \vec v_1 + \vec v_2),$
so $2$ is an eigenvalue.

Similarly, we can notice that:
$A (2\vec v_0-\vec v_1 - \vec v_2)=2\vec v_2+2\vec v_1 -\vec v_0 -\vec v_2-\vec v_1 - \vec v_0=-(2\vec v_0-\vec v_1 - \vec v_2),$
so $-1$ is an eigenvalue.

What confuses me is that we can do the same trick with $(-\vec v_0+2\vec v_1 - \vec v_2)$ and $(-\vec v_0-\vec v_1+2\vec v_2)$, getting the same eigenvalue $-1$.
So in total we have $4$ eigenvectors for an $n\times n$ matrix?
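A sanity check I find useful (my own remark, not part of the question): in the basis $\vec v_i$ the operator is the circulant adjacency matrix of the $n$-cycle, whose eigenvalues are $2\cos(2\pi k/n)$ for $k=0,\dots,n-1$. For $n=3$ that is $2, -1, -1$, so $-1$ has a two-dimensional eigenspace and the four eigenvectors above cannot all be linearly independent.

```python
import numpy as np

n = 5
A = np.zeros((n, n))
for i in range(n):
    A[(i - 1) % n, i] = 1   # column i of A is v_{i-1} + v_{i+1}, indices mod n
    A[(i + 1) % n, i] = 1

eigs = np.sort(np.linalg.eigvalsh(A))                        # A is symmetric
expected = np.sort(2 * np.cos(2 * np.pi * np.arange(n) / n))
print(np.allclose(eigs, expected))   # True
```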

An algebra given by generators and relations and an algebra generated as a subalgebra by some elements. Are they isomorphic?

Posted: 26 May 2022 02:15 PM PDT

Let $k$ be a field and consider the following two $k$-algebras:

  1. $R_1 = k[a,b,c,d] / (ab - cd). $
  2. $R_2$ is the unital $k$-subalgebra of $k(t)[x,y]$ generated (as an algebra) by $x,y,tx,t^{-1}y$.

Are $R_1$ and $R_2$ isomorphic?

Clearly we have a surjective map $\varphi \colon k[a,b,c,d] \rightarrow R_2$ which maps $$a \mapsto x, b \mapsto y, c \mapsto tx, d \mapsto t^{-1}y$$ such that $(ab - cd) \subseteq \ker \varphi$ but I'm not sure if the kernel is strictly larger or not.

The algebra $R_1$ is generated by four elements $a,b,c,d$ where the "only" relation we impose is $ab = cd$. The algebra $R_2$ is also generated by four elements which satisfy the relation but a priori might satisfy "more" relations (which come from the elements being specific elements in $k(t)[x,y]$). For example, we have the relation $\left( tx + t^{-1}y \right)^2 = t^2 x^2 + 2xy + t^{-2}y^{2}$. However, this relation written in terms of $a,b,c,d$ says that $(c+d)^2 = c^2 + 2ab + d^2$ which is already a consequence of the basic relation $ab = cd$ so this isn't a "new" relation.

Is the limit of this sequence Lipschitz-continuous?

Posted: 26 May 2022 02:09 PM PDT

Suppose I have sequences $(f_{i,n})_n$ for $i=0,1,..,m$ of $M$-Lipschitz functions from $\mathbb{R}$ to $\mathbb{R}$. Each of these converges uniformly to an $M$-Lipschitz function $f_i$. Now consider $F_n:\mathbb{R} \to \mathbb{R}^k$ defined as $F_n(x)=\sum\limits_{i=0}^m v_i f_{i,n}(x)$, where $v_i \in \mathbb{R}^k$, and suppose $F_n$ converges uniformly to a function $F$.

Is $F$ Lipschitz-continuous?

Are the only quadrilaterals satisfying this symmetric relation rectangles?

Posted: 26 May 2022 02:15 PM PDT

$\newcommand{\S}{\mathbb{S}^1}$ $\newcommand{\la}{\lambda}$

While solving an optimization problem, I reached the following question:

Let $x_1,x_2,x_3,x_4 \in \S$ be four distinct points on the unit circle.

Suppose that there exist strictly positive real numbers $\la_{ij}=\la_{ji}, 1\le i\le j\le 4$ such that $$ \sum_{j \neq i} \la_{ij}x_j \in \text{span}\{x_i\} $$ for every $i \in \{1,2,3,4\}$.

Question: Are the $x_i$ the vertices of a rectangle?

It suffices to prove that $\sum_i x_i=0$.


A rectangle does satisfy the requirement, with $\la_{ij}=1$; Note that $x_2=-x_4, x_1=-x_3$ are antipodal.

Edit:

If we omit the symmetry condition $\la_{ij}=\la_{ji}$ and the positivity condition $\lambda_{ij}>0$, then any "non-degenerate" configuration satisfies this: if any three of the vertices are linearly independent, then we can choose (not necessarily symmetric) coefficients that satisfy the requirement. But with the symmetry condition this is not so clear.

Integral of this weird function $ \int \frac{1}{(x^2+x+1)^2}dx $

Posted: 26 May 2022 02:06 PM PDT

I put this integral into Symbolab and it produced a very complicated result. Basically this is the result, but I believe there is a simpler way to solve this question:

$$ \frac{2}{3\sqrt{3}}\left(2\arctan \left(\frac{2x+1}{\sqrt{3}}\right)+\sin \left(2\arctan \left(\frac{2x+1}{\sqrt{3}}\right)\right)\right)+C $$

This was my question $$ \int \frac{1}{\left(x^2+x+1\right)^2}dx $$
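Using $\sin(2\arctan u)=\frac{2u}{1+u^2}$ with $u=\frac{2x+1}{\sqrt3}$, Symbolab's answer simplifies to $\frac{4}{3\sqrt 3}\arctan\frac{2x+1}{\sqrt 3} + \frac{2x+1}{3(x^2+x+1)} + C$. A quick numerical differentiation (my sketch, not Symbolab's output) confirms both forms match the integrand:

```python
from math import atan, sin, sqrt

def f(x):
    return 1 / (x**2 + x + 1)**2

def F_symbolab(x):
    u = (2*x + 1) / sqrt(3)
    return 2/(3*sqrt(3)) * (2*atan(u) + sin(2*atan(u)))

def F_simplified(x):
    return 4/(3*sqrt(3)) * atan((2*x + 1)/sqrt(3)) + (2*x + 1)/(3*(x**2 + x + 1))

def numeric_deriv(F, x, h=1e-6):
    return (F(x + h) - F(x - h)) / (2*h)

x = 0.4
print(abs(numeric_deriv(F_symbolab, x) - f(x)) < 1e-6)   # True
print(abs(F_symbolab(x) - F_simplified(x)) < 1e-12)      # True: same function
```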

Conceptual confusion regarding raising and lowering index

Posted: 26 May 2022 01:55 PM PDT

In Pavel Grinfeld's tensor calculus course (eg: 18:51) , it is said that in a given coordinate system on $\mathbb{R}^3$, we can write:

$$ e^i = g^{i \alpha} e_{\alpha}$$

Suppose we evaluate the above in Cartesian coordinates. Then the metric tensor is $g^{i\alpha} = \delta^{i\alpha}$, which means:

$$e^i = e_i$$

But this seems paradoxical to me. The left side is something which lives in the cotangent space / dual space, and the right side is something which lives in the vector space itself. So how can we say they are equal to each other?

$h^{1,1}$ of blow-up of a complex surface along a point, Leray spectral sequence

Posted: 26 May 2022 01:58 PM PDT

Let $S$ be a smooth projective variety of dimension $2$ over $\mathbb{C}$, consider the blow-up $\tilde{S}$ of $S$ along one point $x\in S$. How can I show that $h^{1,1}(\tilde{S})=1+h^{1,1}(S)$ by Leray spectral sequence?

Existence of some functions

Posted: 26 May 2022 01:49 PM PDT

In my algebraic topology exam, the following question was asked, which I was not able to answer: $f$ is a holomorphic function from a simply connected space $U$ to $\Bbb C^*$. Then there exist (a) $\log f$ from $U$ to $\Bbb C$, and (b) $h$ from $U$ to $\Bbb C^*$ such that $h^2=f$. I am not able to see how I should proceed. It seems that I have to use covering spaces and lifts, but I am not able to find a solution.

Converse to Lebesgue Differentiation Theorem

Posted: 26 May 2022 02:03 PM PDT

Suppose we are given two finite positive Borel measures $\mu$, $\nu$ on $\mathbb{R}^n$ such that the function $$ f(x) := \limsup_{r\to 0} \frac{\mu(B(x,r))}{\nu(B(x,r))} $$ is in $L^1(\nu)$. Is it true that $\mu \ll \nu$? Note that in a sense, this would be a converse of the Lebesgue differentiation theorem, since if we started off assuming that $\mu \ll \nu$, then $\nu$-a.e., $f$ would be equal to the Radon-Nikodym derivative of $\mu$ with respect to $\nu$, which is in $L^1(\nu)$.

If it helps, I think I'm OK with also assuming that $\nu \ll \mu$, or even $\nu \leq \mu$.

For some context, this is coming up in my research constructing equilibrium states in dynamics. I tried a google search and I couldn't find very much, so any leads would be appreciated, even if not a full response. Thank you!

EDIT:

It seems the answer is yes if we assume $\nu \ll \mu$. To see why, note that, by Lebesgue differentiation, $$g(x) := \lim_{r\to 0}\frac{\nu(B(x,r))}{\mu(B(x,r))}$$ exists $\mu$-a.e., and is equal to $\frac{d\nu}{d\mu}$. Since $\nu\ll\mu$, the limit also exists $\nu$-a.e. Since $f$ is in $L^1(\nu)$, $f$ is finite $\nu$-a.e., and thus $f(x) = g^{-1}(x)$ for $\nu$-a.e. $x$. Thus $g^{-1} = f \in L^1(\nu)$, which implies $\mu \ll \nu$.

When is the polynomial $X^a+Y^b \in \mathbb{Q}[X,Y]$ irreducible?

Posted: 26 May 2022 02:16 PM PDT

I am studying the irreducibility of the polynomial $X^a+Y^b$.

I proved that for $X^a+Y^b \in \mathbb{C}[X,Y]$, irreducibility is equivalent to $a$ and $b$ being relatively prime. I also proved that for $X^a+Y^b \in \mathbb{R}[X,Y]$, it is equivalent to $\gcd(a,b)$ being equal to $1$ or $2$.

I expect that $X^a+Y^b \in \mathbb{Q}[X,Y]$ is irreducible if and only if $\gcd(a,b)=2^l$ for some non-negative integer $l$. However, I don't have any ideas for the proof. Would you tell me any ideas or hints?
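One direction of the expected equivalence is immediate (my remark, not a full proof): if $\gcd(a,b)$ has an odd prime factor $p$, the polynomial factors over $\mathbb{Q}$ via the sum of $p$-th powers,

```latex
% Write u = X^{a/p}, v = Y^{b/p}.  Then
X^a + Y^b = u^p + v^p
  = (u + v)\bigl(u^{p-1} - u^{p-2}v + \cdots - u v^{p-2} + v^{p-1}\bigr),
% so irreducibility forces gcd(a,b) to be a power of 2.
% The harder direction is showing that gcd(a,b) = 2^l suffices.
```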

Left cosets of the kernel and their relationship to the image

Posted: 26 May 2022 01:59 PM PDT

If we have groups $G$ and $H$, a homomorphism $F$ between them, and left cosets $g\,\mathrm{Ker}(F)$:

Why is it that "there is one left coset $g\,\mathrm{Ker}(F)$ for each element of $\mathrm{Im}(F)$"?

(This is part of the answer to part (ii) of the attached question).

(Image of full question attached.) Thanks!
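As a concrete finite sanity check (a toy example of my own, not from the question): take the additive groups $\mathbb{Z}_6$ and $\mathbb{Z}_3$ with $F(g) = g \bmod 3$, and count cosets of the kernel against image elements:

```python
# Toy example: F : Z_6 -> Z_3 (additive groups), F(g) = g mod 3.
n, m = 6, 3
F = lambda g: g % m

kernel = {g for g in range(n) if F(g) == 0}  # {0, 3}

# Left cosets g + Ker(F); each one is the full preimage of one image element.
cosets = {frozenset((g + k) % n for k in kernel) for g in range(n)}
image = {F(g) for g in range(n)}

print(len(cosets), len(image))  # 3 3
```

The point is that $g_1$ and $g_2$ lie in the same coset of $\mathrm{Ker}(F)$ exactly when $F(g_1)=F(g_2)$, so cosets biject with image elements (this is the content of the first isomorphism theorem).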

Creating an integral to represent the volume of the intersection of two balls in cartesian coordinates

Posted: 26 May 2022 01:51 PM PDT

The question states:

Let $A$ be the intersection of the balls

$x^2+y^2+z^2\leq 9$ and $x^2+y^2+(z-8)^2\leq 49$

I am asked to just set up the iterated triple integral that represents the volume of $A$ in cartesian coordinates.

What I am able to determine so far is that for the equation

$x^2+y^2+z^2\leq 9 $:

$-\sqrt{9-x^2-y^2}\leq z \leq \sqrt{9-x^2-y^2}$

$-\sqrt{9-x^2}\leq y \leq \sqrt{9-x^2}$

$-3\leq x \leq 3$

If I were to set up this integral it would be:

$\int_{-3}^{3}\int_{-\sqrt{9-x^2}}^{\sqrt{9-x^2}} \int_{-\sqrt{9-x^2-y^2}}^{\sqrt{9-x^2-y^2}} dzdydx$

But I don't know how I'm supposed to set up the intersection of the two balls?

I was thinking of setting the two equations equal to each other so that

$x^2+y^2\leq 9-z^2$ and

$x^2+y^2\leq 49-(z-8)^2$

so then I have $9-z^2=49-(z-8)^2$

Solving for $z$ I get $z=3/2$ but I don't know what to do with this information.
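Not the requested iterated integral, but a numerical cross-check of the role of $z=3/2$: that plane is where the binding ball switches, so slicing by disks gives $V = \pi\int_1^{3/2}\bigl(49-(z-8)^2\bigr)\,dz + \pi\int_{3/2}^{3}\bigl(9-z^2\bigr)\,dz = \tfrac{22\pi}{3}$. A Monte Carlo sketch (my own check, not part of the question) agrees:

```python
import math
import random

random.seed(1)

# Exact volume from the z-slice decomposition (disk cross-sections,
# with the binding ball switching at z = 3/2):
#   V = pi*int_1^{3/2} (49-(z-8)^2) dz + pi*int_{3/2}^3 (9-z^2) dz = 22*pi/3
exact = 22 * math.pi / 3   # about 23.04

# Monte Carlo over the bounding box [-3, 3]^3 of the smaller ball.
N = 200_000
hits = 0
for _ in range(N):
    x, y, z = (random.uniform(-3, 3) for _ in range(3))
    if x*x + y*y + z*z <= 9 and x*x + y*y + (z - 8)**2 <= 49:
        hits += 1
mc = 216.0 * hits / N      # box volume is 6^3 = 216
print(round(exact, 2), round(mc, 2))
```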

What is the intuition behind conditional expectation in a measure-theoretic treatment of probability?

Posted: 26 May 2022 02:00 PM PDT

What is the intuition behind conditional expectation in a measure-theoretic sense, as opposed to a non-measure-theoretic treatment?

You may assume I know:

  • what a probability space $(\Omega, \mathcal{F}, \mathbb{P})$ refers to
  • probability without measure theory really well (i.e., discrete and continuous random variables)
  • what the measure-theoretic definition of a random variable is
  • that a Lebesgue integral has something to do with linear combinations of step functions, and intuitively, it involves partitioning the $y$-axis (as opposed to the Riemann integral, which partitions the $x$-axis).

I haven't had time to learn measure-theoretic probability lately due to graduate school creeping in as well as other commitments, and conditional expectation is often covered as one of the last topics in every measure-theoretic probability text I've seen.

I have seen notations such as $\mathbb{E}[X \mid \mathcal{F}]$, where I assume $\mathcal{F}$ is some sort of $\sigma$-algebra - but of course, this looks very different from, say, $\mathbb{E}[X \mid Y]$ from what I saw in my non-measure-theoretic treatment of probability, where $X$ and $Y$ are random variables.

I was also surprised to see that one book I have (Essentials of Probability Theory for Statisticians by Proschan and Shaw (2016)), if I recall correctly, explicitly states that conditional expectation is defined as a conditional expectation, rather than the conditional expectation, which implies to me that there's more than one possible conditional expectation for a given pair of random variables. (Unfortunately, I don't have the book on me right now, but I can update this post later.)

The Wikipedia article is quite dense, and I see words such as "Radon-Nikodym" which I haven't learned yet, but I would at least like to get an idea of what the intuition of conditional expectation is in a measure-theoretic sense.
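Here is the finite-space picture that often serves as the intuition (a toy example of my own, not from any particular book): when $\mathcal{F}$ is generated by a partition of $\Omega$, $\mathbb{E}[X \mid \mathcal{F}]$ is the random variable obtained by averaging $X$ over each atom of the partition, and $\mathbb{E}[X \mid Y]$ is the special case where the atoms are the level sets of $Y$:

```python
# Toy example: Omega = {0,...,5} with uniform probability.
# F is the sigma-algebra generated by the partition {{0,1},{2,3},{4,5}}.
X = [1, 3, 2, 6, 4, 8]
atoms = [{0, 1}, {2, 3}, {4, 5}]

# E[X | F](w) = average of X over the atom containing w,
# so E[X | F] is constant on atoms, i.e. F-measurable.
cond = {}
for atom in atoms:
    avg = sum(X[w] for w in atom) / len(atom)
    for w in atom:
        cond[w] = avg

print([cond[w] for w in range(6)])  # [2.0, 2.0, 4.0, 4.0, 6.0, 6.0]
```

The general measure-theoretic definition abstracts exactly this: $\mathbb{E}[X \mid \mathcal{F}]$ is any $\mathcal{F}$-measurable random variable whose integral agrees with that of $X$ over every set in $\mathcal{F}$. That also explains the "a" versus "the": the defining property pins it down only up to sets of probability zero.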

Numerics for a game theory calculation using expected utility

Posted: 26 May 2022 01:52 PM PDT

I am trying to replicate Bruce Bueno de Mesquita's (BDM) results on political game theory for prediction. Based on where actors stand on issues, their capabilities, and the salience they attach to each issue, BDM's method attempts to find the eventual decision point by simulating a game. He reportedly used this method with much success and published his results in successive journals, the latest of which is (1). This is his so-called "expected utility method"; there is a newer method (3), but there is less documentation on it, so I wanted to use the EU model first.

Scholz et al. tried to replicate the findings and documented their work in (2). I took their work as my basis, since many BDM articles and books are behind paywalls. The authors of (4) in turn took Scholz's work as a basis, added a machine learning method on top, and created a new product.

I wrote the code, however I am not sure I was successful at replicating results.

import pandas as pd
import numpy as np
import itertools

Q = 1.0 ; T = 1.0

class Game:

    def __init__(self,df):
        self.df = df
        self.df_capability = df.Capability.to_dict()
        self.df_position = df.Position.to_dict()
        self.df_salience = df.Salience.to_dict()
        self.max_pos = df.Position.max()
        self.min_pos = df.Position.min()

    def weighted_median(self):
        self.df['w'] = self.df.Capability*self.df.Salience
        self.df['w'] = self.df['w'] / self.df['w'].sum()
        self.df['w'] = self.df['w'].cumsum()
        return float(self.df[self.df['w']>=0.5].head(1).Position)

    def mean(self):
        return (self.df.Capability*self.df.Position*self.df.Salience).sum() / \
               (self.df.Capability*self.df.Salience).sum()

    def Usi_i(self,i,j,ri=1.):
        tmp1 = self.df_position[i]-self.df_position[j]
        tmp2 = self.max_pos-self.min_pos
        return 2. - 4.0 * ( (0.5-0.5*np.abs(float(tmp1)/tmp2) )**ri)

    def Ufi_i(self,i,j,ri=1.):
        tmp1 = self.df_position[i]-self.df_position[j]
        tmp2 = self.df.Position.max()-self.df.Position.min()
        return 2. - 4.0 * ( (0.5+0.5*np.abs(float(tmp1)/tmp2) )**ri )

    def Usq_i(self,i,ri=1.):
        return 2.-(4.*(0.5**ri))

    def Ui_ij(self,i,j):
        tmp1 = self.df_position[i] - self.df_position[j]
        tmp2 = self.max_pos-self.min_pos
        return 1. - 2.*np.abs(float(tmp1) / tmp2)

    def v(self,i,j,k):
        return self.df_capability[i]*self.df_salience[i]*(self.Ui_ij(i,j)-self.Ui_ij(i,k))

    def Pi(self,i):
        l = np.array([[i,j,k] for (j,k) in itertools.combinations(range(len(self.df)), 2) if i!=j and i!=k])
        U_filter = np.array(map(lambda (i,j,k): self.Ui_ij(j,i)>self.Ui_ij(i,k), l))
        lpos = l[U_filter]
        tmp1 = np.sum(map(lambda (i,j,k): self.v(j,i,k), lpos))
        tmp2 = np.sum(map(lambda (i,j,k): self.v(j,i,k), l))
        return float(tmp1)/tmp2

    def Ubi_i(self,i,j,ri=1):
        tmp1 = np.abs(self.df_position[i] - self.weighted_median()) + \
               np.abs(self.df_position[i] - self.df_position[j])
        tmp2 = np.abs(self.max_pos-self.min_pos)
        return 2. - (4. * (0.5 - (0.25 * float(tmp1) / tmp2))**ri)

    def Uwi_i(self,i,j,ri=1):
        tmp1 = np.abs(self.df_position[i] - self.weighted_median()) + \
               np.abs(self.df_position[i] - self.df_position[j])
        tmp2 = np.abs(self.max_pos-self.min_pos)
        return 2. - (4. * (0.5 + (0.25 * float(tmp1) / tmp2))**ri)

    def EU_i(self,i,j,r=1):
        term1 = self.df_salience[j] * \
                ( self.Pi(i)*self.Usi_i(i,j,r) + ( 1.-self.Pi(i) )*self.Ufi_i(i,j,r) )
        term2 = (1-self.df_salience[j])*self.Usi_i(i,j,r)
        #term3 = -self.Qij(j,i)*self.Usq_i(i,r)
        #term4 = -(1.-self.Qij(j,i))*( T*self.Ubi_i(i,j,r) + (1.-T)*self.Uwi_i(i,j,r) )
        term3 = -Q*self.Usq_i(i,r)
        term4 = -(1.-Q)*( T*self.Ubi_i(i,j,r) + (1.-T)*self.Uwi_i(i,j,r) )
        return term1+term2+term3+term4

    def EU_j(self,i,j,r=1):
        return self.EU_i(j,i,r)

    def Ri(self,i):
        # get all j's except i
        l = [x for x in range(len(self.df)) if x != i]
        tmp = np.array(map(lambda x: self.EU_j(i,x), l))
        numterm1 = 2*np.sum(tmp)
        numterm2 = (len(self.df)-1)*np.max(tmp)
        numterm3 = (len(self.df)-1)*np.min(tmp)
        return float(numterm1-numterm2-numterm3) / (numterm2-numterm3)

    def ri(self,i):
        Ri_tmp = self.Ri(i)
        return (1-Ri_tmp/3.) / (1+Ri_tmp/3.)

    def Qij(self,i,j):
        l = np.array([k for k in range(len(self.df))])
        res = map(lambda k: self.Pi(k)+(1-self.df_salience[k]), l)
        return np.product(res)

    def do_round(self,df):
        self.df = df; df_new = self.df.copy()
        # reinit
        self.df_capability = self.df.Capability.to_dict()
        self.df_position = self.df.Position.to_dict()
        self.df_salience = self.df.Salience.to_dict()
        self.max_pos = self.df.Position.max()
        self.min_pos = self.df.Position.min()

        offers = [list() for i in range(len(self.df))]
        ris = [self.ri(i) for i in range(len(self.df))]
        for (i,j) in itertools.combinations(range(len(self.df)), 2):
            eui = self.EU_i(i,j,r=ris[i])
            euj = self.EU_j(i,j,r=ris[j])
            if eui > 0 and euj > 0:
                # conflict
                mid_step = (self.df_position[i]-self.df_position[j])/2.
                print i,j,eui,euj,'conflict, both step', mid_step, -mid_step
                offers[j].append(mid_step)
                offers[i].append(-mid_step)
            elif eui > 0 and euj < 0 and np.abs(eui) > np.abs(euj):
                # compromise - actor i has the upper hand
                print i,j,eui,euj,'compromise', i, 'upper hand'
                xhat = (self.df_position[i]-self.df_position[j]) * np.abs(euj/eui)
                offers[j].append(xhat)
            elif eui < 0 and euj > 0 and np.abs(eui) < np.abs(euj):
                # compromise - actor j has the upper hand
                print i,j,eui,euj,'compromise', j, 'upper hand'
                xhat = (self.df_position[j]-self.df_position[i]) * np.abs(eui/euj)
                offers[i].append(xhat)
            elif eui > 0 and euj < 0 and np.abs(eui) < np.abs(euj):
                # capitulation - actor i has the upper hand
                j_moves = self.df_position[i]-self.df_position[j]
                print i,j,eui,euj,'capitulate', i, 'wins', j, 'moves', j_moves
                offers[j].append(j_moves)
            elif eui < 0 and euj > 0 and np.abs(eui) > np.abs(euj):
                # capitulation - actor j has the upper hand
                i_moves = self.df_position[j]-self.df_position[i]
                print i,j,eui,euj,'capitulate', j, 'wins', i, 'moves', i_moves
                offers[i].append(i_moves)
            else:
                print i,j,eui,euj,'nothing'

        print offers
        df_new['offer'] = map(lambda x: 0 if len(x)==0 else x[np.argmin(np.abs(x))], offers)
        df_new.loc[:,'Position'] = df_new.Position + df_new.offer
        df_new.loc[df_new['Position']>self.max_pos,'Position'] = self.max_pos
        df_new.loc[df_new['Position']<self.min_pos,'Position'] = self.min_pos
        return df_new

To run, there is run.py:

import pandas as pd, sys
import numpy as np, matplotlib.pylab as plt
import scholz, itertools

if len(sys.argv) < 3:
    print "\nUsage: run.py [CSV] [ROUNDS]"
    exit()

df = pd.read_csv(sys.argv[1]); print df
df.Position = df.Position.astype(float)
df.Capability = df.Capability.astype(float)
df.Salience = df.Salience/100.

game = scholz.Game(df)

results = pd.DataFrame(index=df.index)
for i in range(int(sys.argv[2])):
    results[i] = df.Position
    df = game.do_round(df)
    print df
    print 'weighted_median', game.weighted_median(), 'mean', game.mean()

results = results.T
results.columns = df.Actor
print results
results.plot()
plt.savefig('out-%s.png' % sys.argv[1])

I ran this code on EU emission agreement, Iran presidential election data from (4), on the British EMU data from (5) (for Labor party case), and two small synthetic datasets I created.

Actor,Capability,Position,Salience
Netherlands,8,40,80
Belgium,8,70,40
Luxembourg,3,40,20
Germany,16,40,80
France,16,100,60
Italy,16,100,60
UK,16,100,90
Ireland,5,70,10
Denmark,5,40,100
Greece,8,70,70

Actor,Capability,Position,Salience
Jalili,24,10,70
Haddad,8,20,100
Gharazi,1,40,100
Rezayi,20,40,60
Ghalibaf,64,50,100
Velayati,7,50,25
Ruhani,21,80,100
Aref,30,100,70

Actor,Capability,Position,Salience
Labor Pro EMU,100,75,40
Labor Eurosceptic,50,35,40
The Bank of England,10,50,60
Technocrats,10,95,40
British Industry,10,50,40
Institute of Directors,10,40,40
Financial Investors,10,85,60
Conservative Eurosceptics,30,5,95
Conservative Europhiles,30,60,50

Actor,Capability,Position,Salience
A,100,100,100
B,100,90,100
C,50,50,50
D,5,5,10
E,10,10,20

Actor,Capability,Position,Salience
A,100,5,100
B,100,10,100
C,50,50,50
D,5,100,10
E,10,90,20

For the EU emission data, (4) reports the result should have been around 8; I get 6.5. For Iran, my outcome is around 60, favoring reformers, but this is a far cry from Preana's and BDM's findings, which are around 80. For the EMU data, the authors report an anti-euro outcome near 4; my finding is around 60.

The synthetic datasets behave fine, always coalescing near the top and bottom, but these are simple cases. I am attaching the graph outputs below as well.

Output plots 1-5 (attached).


  1. Bueno de Mesquita BB (1994) Political forecasting: an expected utility method. In: Stokman F (ed.) European Community Decision Making. New Haven, CT: Yale University Press, Chapter 4, 71-104.
  2. https://oficiodesociologo.files.wordpress.com/2012/03/scholz-et-all-unravelling-bueno-de-mesquita-s-group-decision-model.pdf
  3. A New Model for Predicting Policy Choices: Preliminary Tests http://irworkshop.sites.yale.edu/sites/default/files/BdM_A%20New%20Model%20for%20Predicting%20Policy%20ChoicesREvised.pdf
  4. http://www.scirp.org/journal/PaperDownload.aspx?paperID=49058
  5. The Predictability of Foreign Policies, The British EMU Policy, https://www.rug.nl/research/portal/files/3198774/13854.pdf
  6. J. Velev, Python Code, https://github.com/jmckib/bdm-scholz-expected-utility-model.git

The distribution of sample proportion for given population proportion and sample size

Posted: 26 May 2022 02:06 PM PDT

If the population proportion is 0.90 and a sample of size 64 is taken, what is the probability that the sample proportion is more than 0.89? (4dp)

work: $n=64$, $\hat p=0.89$, so $X=n \hat p =56.96$. $$ P(\hat p >0.89)=P( X >56.96)=1- P( X \leq 56.96), $$ and then how do I proceed?

The answer is $0.6064$

More details of the solutions would be great, thanks
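A sketch of the standard normal-approximation route (my reading of what the "4dp" answer intends): standardize $\hat p$ using mean $p=0.90$ and standard error $\sqrt{p(1-p)/n}$, rather than working with the count $X$. Rounding $z$ to two decimals, as a printed table forces, reproduces the stated $0.6064$:

```python
import math

p, n, phat = 0.90, 64, 0.89
se = math.sqrt(p * (1 - p) / n)       # standard error = 0.0375
z = (phat - p) / se                   # about -0.2667

def Phi(t):
    # Standard normal CDF via the error function.
    return 0.5 * (1 + math.erf(t / math.sqrt(2)))

prob = 1 - Phi(z)                     # P(p_hat > 0.89), about 0.6051
prob_table = 1 - Phi(round(z, 2))     # z rounded to -0.27 gives about 0.6064
print(round(prob, 4), round(prob_table, 4))
```

So the book's $0.6064$ comes from looking up $z=-0.27$ in a table; keeping $z=-0.2667$ unrounded gives $0.6051$.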
