Recent Questions - Mathematics Stack Exchange |
- Volume of tetrahedron / dodecahedron in $m$ dimension
- The isomorphism of $ SO_4(\mathbb{R}) $ with $ SU_2 \otimes SU_2 $
- What are the steps to calculate this? (Fundamental theorem of calculus)
- Inequality involving expectation and L^p norm
- Extremal of the functional J[u]
- An uncountable product of $\mathbb{R}$ with itself is not metrizable (product topology).
- How to solve for x = (2a/b) = b/(a/2)
- Are there infinitely many square numbers with increasing digits? [duplicate]
- If an event with probability p occurs 2 out of 3 times, how can we find the probability that p > 0.5?
- Necessary condition for Open map in Inverse function theorem
- Minimum Length of Circumradius $R$
- find functors in the Rel category
- Showing that the operator $T$ is a bounded linear operator mapping $L^1[0, 1]$ to $c_0$
- Solve $\sin(x)\cos(x)=\sin(x)+\cos(x)$
- Why is the third derivative of cumulant generating function = skewness?
- Statement on Lie Group and Lie Algebra Homomorphisms
- Is my LU Factorization Incorrect?
- Calculate the inverse Laplace transform by convolution
- The value of $y=f(x)$ at $a$
- Let $X \subset \Bbb R^2$ be a union of the coordinate axes and the line $x+y=1$, $0\le x \le1$. Show that $X$ is homotopy equivalent to $\Bbb S^1$.
- Every metric space is paracompact (an elegant proof)
- Are these generalizations known in the literature?
- Straight line passes through a circle
- How to differentiate inverse of a matrix inside frobenius norm?
- Volume of objects like hypercube / hypersphere : $V_{n}^{(m)}(r) = \dots$
- Positive integer solutions to product
- How to calculate start radius, end radius and angle for clothoid segments?
- List of pseudo-Catalan solids
- How to differentiate a tensor Frobenius norm?
- Finding inverse probability density function.
Volume of tetrahedron / dodecahedron in $m$ dimension Posted: 06 Mar 2022 07:45 AM PST What is the formula for calculating the volume of a tetrahedron in $m$ dimensions? I know equations only for the $\text{2D}$ and $\text{3D}$ cases. Tetrahedron: For $\text{2D}$ it is: $$ V(r) = \frac{3 \sqrt{3}}{4} * r^2 $$ For $\text{3D}$ it is: Dodecahedron: For $\text{2D}$ it is: $$ V(r) = \frac{5}{4} \sqrt{ \frac{1}{2} (5 + \sqrt{5}) } * r^2 $$ For $\text{3D}$ it is: I have been looking for the formula for at least a few hours without any results. |
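If "tetrahedron in $m$ dimensions" is read as the regular $m$-simplex, there is a closed form in terms of the circumradius $r$, namely $V = \frac{\sqrt{m+1}}{m!}\left(\frac{m+1}{m}\right)^{m/2} r^m$. That reading and derivation are mine, not part of the question, so here is a small sketch (function name mine) checking it against the $\text{2D}$ value above and the usual $\text{3D}$ value:

```python
# Sketch (not from the original post): volume of the regular m-simplex with
# circumradius r, assuming "tetrahedron in m dimensions" means the regular m-simplex.
from math import factorial, sqrt

def regular_simplex_volume(m: int, r: float) -> float:
    """Volume of the regular m-simplex with circumradius r."""
    return sqrt(m + 1) * ((m + 1) / m) ** (m / 2) / factorial(m) * r ** m

# Sanity checks against the 2D formula from the question and the usual 3D value:
print(regular_simplex_volume(2, 1.0))   # ~1.2990 = 3*sqrt(3)/4
print(regular_simplex_volume(3, 1.0))   # ~0.5132 = 8*sqrt(3)/27
```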
The isomorphism of $ SO_4(\mathbb{R}) $ with $ SU_2 \otimes SU_2 $ Posted: 06 Mar 2022 07:42 AM PST Let $ A,B $ be matrix groups (with entries in the same field). Then the tensor/Kronecker product $ A \otimes B $ is a matrix group and $$ \pi: A \times B \to A \otimes B $$ is a group homomorphism. Taking $ A=B=SU_2 $ we have a map $ \pi: SU_2 \times SU_2 \to SU_4 $ given by $$ (A,B) \mapsto A \otimes B $$ The only nontrivial element of the kernel is $ (-1,-1) $. So the image $ SU_2 \otimes SU_2 $ of $ \pi $ is a subgroup of $ SU_4 $ isomorphic to $ SO_4(\mathbb{R}) $. Is $ SU_2 \otimes SU_2 $ conjugate to $ SO_4(\mathbb{R}) $ in $ SU_4 $? Since matrices in $ SU_2 $ have real trace, all matrices in $ SU_2 \otimes SU_2 $ have real trace. So it is at least plausible that $ SU_2 \otimes SU_2 $ and $ SO_4(\mathbb{R}) $ are conjugate. |
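A quick numerical sanity check of the real-trace observation (the sampling helper below is mine, not from the post):

```python
# Sketch: sample random SU(2) matrices, form their Kronecker product, and check
# that the result lies in SU(4) and has (numerically) real trace.
import numpy as np

def random_su2(rng):
    # Parametrize SU(2) as [[a, b], [-conj(b), conj(a)]] with |a|^2 + |b|^2 = 1.
    v = rng.normal(size=4)
    v /= np.linalg.norm(v)
    a, b = v[0] + 1j * v[1], v[2] + 1j * v[3]
    return np.array([[a, b], [-np.conj(b), np.conj(a)]])

rng = np.random.default_rng(0)
for _ in range(5):
    A, B = random_su2(rng), random_su2(rng)
    M = np.kron(A, B)                               # element of SU(2) ⊗ SU(2) ⊂ SU(4)
    print(np.allclose(M.conj().T @ M, np.eye(4)),   # unitary
          np.isclose(np.linalg.det(M), 1),          # determinant 1
          abs(np.trace(M).imag) < 1e-12)            # real trace, since tr(A⊗B) = tr(A)tr(B)
```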
What are the steps to calculate this? (Fundamental theorem of calculus) Posted: 06 Mar 2022 07:35 AM PST "If $F(x)=\cos(-2x-4)\,\cdot\,2^{(10-x)/5}\, $ and $f(x)=F'(x)\, $ what is the value of $\int_{-4}^{2}f(x)\,dx\,. $" The answer was given as $4.1$, but how did they get to this? I calculated the derivative of $F(x)$ and got $2^{2x+1}(\sin(-2x-4)+\log(2)\cos(-2x-4))$, and then when I plugged that into $\int_{-4}^{2}f(x)\,dx$ I ended up getting $-2.325$. How did they calculate the answer to be $4.1$? |
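Since $f=F'$, one natural cross-check is to compare $F(2)-F(-4)$ (fundamental theorem of calculus) with a direct numerical integration of $F'$; a sketch of mine, assuming $F$ is exactly as written:

```python
# Sketch: compare F(2) - F(-4) with a numerical integral of F'(x),
# assuming F(x) = cos(-2x - 4) * 2**((10 - x)/5) exactly as written.
import numpy as np
from scipy.integrate import quad

def F(x):
    return np.cos(-2 * x - 4) * 2 ** ((10 - x) / 5)

def f(x, h=1e-6):          # numerical derivative of F (central difference)
    return (F(x + h) - F(x - h)) / (2 * h)

print(F(2) - F(-4))        # fundamental theorem of calculus, ≈ 4.11
print(quad(f, -4, 2)[0])   # direct numerical integration of F', also ≈ 4.11
```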
Inequality involving expectation and L^p norm Posted: 06 Mar 2022 07:48 AM PST We denote by $\left\|\cdot\right\|_{p}$ the usual $L^p$ norm. Let:
According to this article (p.13) we have, for $h'\leq h$: $$ \mathbb{E}\left [ \left ( \int \left ( K_{h'}-K_{h} \right )\left ( x-X \right )t\left ( x \right )\mathrm{d}x \right )^2 \right ] \leq \mathbb{E}\left [ \left ( \int |K_{h'}-K_{h}|\left ( x-X \right )\mathrm{d}x \right ) \right ] \mathbb{E}\left [ \left ( \int |K_{h'}-K_{h}|\left ( x-X \right )t^2\left ( x \right )\mathrm{d}x \right ) \right ] \leq \left\| K_{h'}-K_{h} \right\|_{1}^2\left\| f \right\|_{\infty}\left\| t \right\|_{2}^2 $$ I do not understand how to obtain that inequality. I tried to apply the Cauchy-Schwarz inequality $\left\| gh \right\|_{1} \leq \left\| g\right\|_{2}\left\| h \right\|_{2}$ for $g(x)=\sqrt{|K_{h'}-K_{h}|\left ( x-X \right )}$ and $h(x)=g(x)|t|(x)$, but everything still remains inside a single expectation when I do that. I also tried to apply Cauchy-Schwarz with the scalar product $ \left< u\left ( X \right ),v\left ( X \right ) \right>= \mathbb{E}\left [ \int u\left ( X \right )v\left ( X \right )\right ]$, but then the square is outside of the expectation on the left-hand side of the inequality. The second inequality is also a bit unclear to me; I don't understand how to bound the second expectation. |
Extremal of the functional J[u] Posted: 06 Mar 2022 07:31 AM PST Let $J[u]=\int_0^1({u''}^2-16x^2u')\text{dx}-u^2(1), \quad G=\{u'(1)=2,\; 2u'(0)+u(0)=1\}$ $\delta J[u](h)=\int_0^1(2u^{\text{IV}}+32x)h\text{dx}+2u''h'|_1-2u''h'|_0+(2u'''+16x^2)h|_0-(2u'''+16x^2+2u)h|_1$ $u^{\text{IV}}+16x=0$ $u_0=C_1+C_2x+C_3x^2+C_4x^3-\frac{2}{15}x^5$ How can I find the constants $C_1, C_2, C_3, C_4$ satisfying the given boundary conditions $G$? |
An uncountable product of $\mathbb{R}$ with itself is not metrizable (product topology). Posted: 06 Mar 2022 07:43 AM PST
I wrote what I believe is an incorrect proof of this statement since I feel like I ignored the fact that the product is uncountable. However, I can't find where my mistake is. Couldn't this proof "work" for a countably infinite product of $\mathbb{R}$? (But aren't these products of $\mathbb{R}$ metrizable with the product topology?). Proof: Let $J$ be an uncountable index set. Let $(\mathbb{R}^J,\tau)$ be a topological space under the product topology. Let $\mathcal{N}(x)$ denote the set of every neighbourhood of $x$ in $\mathbb{R}^J$. Let $A\subset\mathbb{R}^J$ such that $\forall x \in A$, $x_\alpha = 1\text{ except for finitely many }\alpha\in J$. We want to show that $0\in\overline{A}$. Let $U\in\mathcal{N}(0)\subset\mathbb{R}$. \begin{gather} U\in\tau \Rightarrow U=\prod_{\alpha\in J}{U_\alpha}\text{ with finitely many } U_\alpha \neq \mathbb{R} \end{gather} Let $\Gamma = \left\lbrace \beta\in J \vert U_\beta \neq \mathbb{R} \right\rbrace$ Then for $\beta\in\Gamma$ we know $U_\beta$ is open in $\mathbb{R}$ and $0\in U_\beta$. So, \begin{gather} U_\beta = ]a_\beta; b_\beta[\text{ with }a_\beta < 0 < b_\beta \end{gather} Let $y\in\mathbb{R}^J$ s.t. for $\alpha\in J$, \begin{gather} y_\alpha = \frac{b_\alpha - a_\alpha}{2}\text{, }\alpha\in\Gamma\\ y_\alpha = 1\text{, }\alpha\notin\Gamma \end{gather} Then $y\in A$ therefore $\exists y\in A$ s.t. $y\in U \cap A$. Since $U\in\mathcal{N}(0)$ we have $0\in A' \subset \overline{A}$. Now we must show that all sequences $(x_n)$ of $A$ do not converge to $0$. Suppose that there is a sequence $(x_n)$ of $A$ s.t. $x_n \rightarrow 0$ \begin{gather} \forall U\in\mathcal{N}(0):\exists N\in\mathbb{N}:n>N\Rightarrow x_n\in U \end{gather} We note $x_{n,\alpha}$ the $\alpha$th coordinate of $x_n$. Let $D_n=\lbrace i\in J \vert x_{n,i}\neq 1\rbrace$ (obviously $D_n$ is finite $\forall n\in\mathbb{N}$). Then, let \begin{gather} \Gamma_n = \prod_{\alpha\in J}U_\alpha \end{gather} such that \begin{gather} U_\alpha = ]-|x_{n,\alpha}|;|x_{n,\alpha}|[\text{, }\alpha\in D_n\\ U_\alpha = \mathbb{R}\text{, }\alpha\notin\ D_n \end{gather} Then $\forall n\in\mathbb{N}$ we have $\Gamma_n\in\mathcal{N}(0)$ and $x_n\notin\Gamma_n$. So $x_n \nrightarrow 0$, which further implies that every sequence $(x_n)$ of $A$ cannot converge to $0$. Therefore $\mathbb{R}^J$ isn't metrizable. |
How to solve for x = (2a/b) = b/(a/2) Posted: 06 Mar 2022 07:29 AM PST x = 2a/b = b/(a/2) I am pretty lost in regards to how to approach this problem. If x is equal to 2a/b and 2a/b is equal to b/(a/2) then do I need to somehow get b/(a/2) into the form of 2a/b in order to solve? |
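One possible reading of the chain of equalities is as the two equations $x = 2a/b$ and $2a/b = b/(a/2)$; a small symbolic sketch of that reading (this framing is mine, not the poster's):

```python
# Sketch: treat the chain as two equations and eliminate a and b.
from sympy import symbols, Eq, solve

a, b, x = symbols('a b x', nonzero=True)
# 2a/b = b/(a/2)  <=>  2a/b = 2b/a  <=>  a**2 = b**2  <=>  a = ±b
sols_a = solve(Eq(2*a/b, b/(a/2)), a)
print(sols_a)                        # [-b, b]
print([2*s/b for s in sols_a])       # corresponding x = 2a/b values: [-2, 2]
```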
Are there infinitely many square numbers with increasing digits? [duplicate] Posted: 06 Mar 2022 07:30 AM PST This is a question that came up while joking around with my friends, but now I am really intrigued by this question. For the sake of brevity, let's call square numbers with monotone increasing digits peculiar squares. Some examples of peculiar squares are $13^2 = 169$ (since $1 \leq 6 \leq 9$) and $15^2 = 225$. The question is: are there infinitely many peculiar squares? To tackle this question, I came up with a more generalized conjecture:
I first tried solving for $n=2$. This was pretty easy, since it is equivalent to proving that there are only finitely many squares of form $11\cdots 1_{(2)}$. Then I tried solving for $n=3$. Simple number theory shows that $11\cdots 122 \cdots 2_{(3)}$ cannot be a square number. Thus, we only need to show that there are finitely many squares of form $11\cdots 1_{(3)}$. This was much easier said than done, and in the end I had to borrow the power of StackExchange. (Integer solutions of $3^n-1=2m^2$) So up to this point, I know that the Peculiar Square Conjecture holds for $n = 2$ and $n = 3$, but I don't have clear idea of how to prove it for $n = 4$ or beyond. Any help or ideas would be much appreciated. |
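A brute-force search is a quick way to gather data on peculiar squares in base $10$ (the helper name below is mine):

```python
# Sketch: list squares whose decimal digits are monotone nondecreasing.
def is_peculiar(k: int) -> bool:
    d = str(k * k)
    return all(d[i] <= d[i + 1] for i in range(len(d) - 1))

print([k * k for k in range(1, 200) if is_peculiar(k)])
# 1, 4, 9, 16, 25, 36, 49, 144, 169, 225, 256, 289, ...  (13^2 and 15^2 appear as in the post)
```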
If an event with probability p occurs 2 out of 3 times, how can we find the probability that p > 0.5? Posted: 06 Mar 2022 07:21 AM PST If an event with probability "p" occurs 2 out of 3 times, how can we find the probability that p > 0.5? |
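One common way to make the question precise (this framing and the prior choice are assumptions on my part, not the poster's) is Bayesian: put a uniform prior on $p$, update on "2 successes in 3 trials", and compute the posterior probability that $p>0.5$; the posterior is then $\mathrm{Beta}(3,2)$:

```python
# Sketch (assumes a Bayesian reading with a uniform prior on p; that prior
# choice is an assumption, not part of the question).
from scipy.stats import beta

posterior = beta(1 + 2, 1 + 1)   # Beta(3, 2): 2 successes, 1 failure on a uniform prior
print(posterior.sf(0.5))         # P(p > 0.5 | data) = 0.6875 = 11/16
```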
Necessary condition for Open map in Inverse function theorem Posted: 06 Mar 2022 07:34 AM PST $f:\mathbb{R}^3 \to \mathbb{R}^3$, $$f(x_1,x_2,x_3)=(e^{x_2\cos x_1},e^{x_2\sin x_1},2x_1-\cos x_3)$$ and $E$ is the set of points $(x_1,x_2,x_3)$ such that there exists an open subset $U$ around $(x_1,x_2,x_3)$ on which $f$ restricted to $U$ is an open map. What is $E$? My approach: The function is continuously differentiable, so I used the inverse function theorem and found the points where the Jacobian is nonzero; these were the points of $\mathbb{R}^3 \setminus \{(x_1,x_2,n\pi)\}$. At each such point there exists a neighborhood on which $f$ is an open map. Are there any other points, namely those of the form $(x_1,x_2,n\pi)$? Is there any result guaranteeing that if the Jacobian is zero at a point $p$, then $f$ will/will not be an open map on some neighborhood of $p$? Please suggest a book for learning this topic (the inverse function theorem). Thanks in advance. |
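A symbolic computation of the Jacobian determinant helps locate where the inverse function theorem applies; this sketch assumes the first two components are $e^{x_2\cos x_1}$ and $e^{x_2\sin x_1}$ (one possible reading of the formula):

```python
# Sketch: Jacobian determinant of f, assuming the exponents are x2*cos(x1)
# and x2*sin(x1) as one reading of the post.
import sympy as sp

x1, x2, x3 = sp.symbols('x1 x2 x3', real=True)
f = sp.Matrix([sp.exp(x2 * sp.cos(x1)),
               sp.exp(x2 * sp.sin(x1)),
               2 * x1 - sp.cos(x3)])
J = f.jacobian([x1, x2, x3])
print(sp.simplify(J.det()))   # the zero set of this determinant is where the IFT gives no conclusion
```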
Minimum Length of Circumradius $R$ Posted: 06 Mar 2022 07:43 AM PST With the usual notation, if in a triangle $\Delta ABC$, $a = 3$, $b = 4$ and the circumradius $R$ is minimum, then the value of $[2rR]$ is (where $[.]$ denotes the greatest integer function). My approach is as follows: $R = \frac{{abc}}{{4\Delta }} \Rightarrow R = \frac{{abc}}{{4rs}} \Rightarrow 2Rr = \frac{{abc}}{{2s}}$ $2s = a + b + c \Rightarrow 2s - 7 = c$ $ \Rightarrow 2Rr = \frac{{12c}}{{c + 7}} \Rightarrow 2Rr = \frac{{12}}{{1 + \frac{7}{c}}}$ I am not able to proceed further. |
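A quick numerical scan over the free side $c$ (my own sketch, with $a=3$, $b=4$ fixed) at least confirms what value of $[2rR]$ the minimizing triangle produces:

```python
# Sketch: with a = 3, b = 4 fixed, scan the third side c in (1, 7), compute the
# circumradius R and inradius r, and look at 2rR at the c that minimizes R.
import numpy as np

a, b = 3.0, 4.0
c = np.linspace(1.001, 6.999, 200000)
s = (a + b + c) / 2
area = np.sqrt(s * (s - a) * (s - b) * (s - c))   # Heron's formula
R = a * b * c / (4 * area)
r = area / s
i = np.argmin(R)
print(c[i], R[i], 2 * r[i] * R[i], int(np.floor(2 * r[i] * R[i])))
# The scan shows R minimized near c = sqrt(7) (right angle at B), where 2rR ≈ 3.29, so [2rR] = 3.
```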
find functors in the Rel category Posted: 06 Mar 2022 07:34 AM PST I'm learning category theory from Steve Awodey's book. In the first exercise we have:
(a) Show that Rel is a category. (b) Show also that there is a functor $G$ : Sets $\rightarrow$ Rel taking objects to themselves and each function $f: A \rightarrow B$ to its graph, $$ G(f)=\{\langle a, f(a)\rangle \in A \times B \mid a \in A\} . $$ (c) Finally, show that there is a functor $C:$ Rel $^{\circ p} \rightarrow$ Rel taking each relation $R \subseteq A \times B$ to its converse $R^{c} \subseteq B \times A$, where, $$ \langle a, b\rangle \in R^{c} \Leftrightarrow\langle b, a\rangle \in R . $$
(a) Identity arrows behave correctly, for if $f \subset A \times B$, then $$ \begin{aligned} f \circ 1_{A} &=\left\{\langle a, b\rangle \mid \exists a^{\prime} \in A:\left\langle a, a^{\prime}\right\rangle \in 1_{A} \wedge\left\langle a^{\prime}, b\right\rangle \in f\right\} \\ &=\left\{\langle a, b\rangle \mid \exists a^{\prime} \in A: a=a^{\prime} \wedge\left\langle a^{\prime}, b\right\rangle \in f\right\} \\ &=\{\langle a, b\rangle \mid\langle a, b\rangle \in f\}=f \end{aligned} $$ and symmetrically $1_{B} \circ f=f$. Composition is associative; if $f \subseteq A \times B$, $g \subseteq B \times C$, and $h \subseteq C \times D$, then $(h \circ g) \circ f=\{\langle a, d\rangle \mid \exists b:\langle a, b\rangle \in f \wedge\langle b, d\rangle \in h \circ g\}$ $$ \begin{aligned} &=\{\langle a, d\rangle \mid \exists b:\langle a, b\rangle \in f \wedge\langle b, d\rangle \in\{\langle b, d\rangle \mid \exists c:\langle b, c\rangle \in g \wedge\langle c, d\rangle \in h\}\} \\ &=\{\langle a, d\rangle \mid \exists b:\langle a, b\rangle \in f \wedge \exists c:\langle b, c\rangle \in g \wedge\langle c, d\rangle \in h\} \\ &=\{\langle a, d\rangle \mid \exists b \exists c:\langle a, b\rangle \in f \wedge\langle b, c\rangle \in g \wedge\langle c, d\rangle \in h\} \\ &=\{\langle a, d\rangle \mid \exists c:(\exists b:\langle a, b\rangle \in f \wedge\langle b, c\rangle \in g) \wedge\langle c, d\rangle \in h\} \\ &=\{\langle a, d\rangle \mid \exists c:\langle a, c\rangle \in g \circ f \wedge\langle c, d\rangle \in h\} \\ &=h \circ(g \circ f) . \end{aligned} $$
|
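The relational composition used in part (a) is easy to experiment with concretely; a small sketch of my own (not from the book), with relations as Python sets of pairs:

```python
# Sketch: relations as sets of ordered pairs, with the composition used in Rel.
def compose(g, f):
    """(g o f) = { (a, c) : there exists b with (a, b) in f and (b, c) in g }."""
    return {(a, c) for (a, b1) in f for (b2, c) in g if b1 == b2}

def identity(A):
    return {(a, a) for a in A}

A, B, C, D = {1, 2}, {'x', 'y'}, {True, False}, {0}
f = {(1, 'x'), (2, 'x'), (2, 'y')}      # f ⊆ A × B
g = {('x', True), ('y', False)}         # g ⊆ B × C
h = {(True, 0)}                         # h ⊆ C × D

print(compose(f, identity(A)) == f and compose(identity(B), f) == f)   # unit laws
print(compose(h, compose(g, f)) == compose(compose(h, g), f))          # associativity
```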
Showing that the operator $T$ is a bounded linear operator mapping $L^1[0, 1]$ to $c_0$ Posted: 06 Mar 2022 07:49 AM PST For $f \in L^1[0,1]$ let $x_n = \int_0^1 f(t) t^n \: dt$, and let $T(f) = \{ x_n \}$. I want to show that the operator $T$ is a bounded linear operator mapping $L^1[0, 1]$ to $c_0$ (the space of real convergent sequences that converge to $0$) and determine the norm of $T$. I have tried to go about this in a direct manner but haven't had any luck. For a bounded linear operator I am using the standard definition: Let $X$ and $Y$ be normed linear spaces. A linear operator $T: X \to Y$ is bounded if there exists an $M \geq 0$ such that $||T u|| \leq M ||u||$ for all $u \in X$. Lastly it is worth noting that I am considering the standard norm of $c_0$. That is to say $$||(x_1, x_2, x_3, \cdots)|| = \sup \{ |x_n| : n \in \mathbb{N} \}.$$ |
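It can help to see the claimed decay $x_n \to 0$ numerically for a concrete $f \in L^1[0,1]$; a small sketch (the choice of $f$ is mine):

```python
# Sketch: x_n = ∫_0^1 f(t) t^n dt for a sample integrable f; the sequence
# should tend to 0, consistent with T mapping into c_0.
import numpy as np
from scipy.integrate import quad

f = lambda t: 1 / np.sqrt(t)        # an L^1[0,1] function that is not bounded
for n in (1, 5, 20, 100, 500):
    xn, _ = quad(lambda t: f(t) * t**n, 0, 1)
    print(n, xn)                    # here x_n = 1/(n + 1/2), which tends to 0
```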
Solve $\sin(x)\cos(x)=\sin(x)+\cos(x)$ Posted: 06 Mar 2022 07:23 AM PST My initial idea was $$(\sin(x)\cos(x))^2=1+2\sin(x)\cos(x)$$ Let $t=\sin(x)\cos(x)$; $$t^2=1+2t \quad\Leftrightarrow\quad t=1-\sqrt2$$ (since $1+\sqrt2>1$). I.e. $$\sin(x)\cos(x)=1-\sqrt2 \quad\Leftrightarrow\quad \tfrac12\sin(2x)=1-\sqrt2 \quad\Leftrightarrow\quad x=\tfrac{1}{2}\arcsin(2(1-\sqrt2))$$ but I didn't get any 'elegant' final solution. Any better ideas? |
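Squaring can introduce extraneous roots, so a numerical scan of one period is a useful check on which branch of the $\arcsin$ actually solves the original equation; a sketch:

```python
# Sketch: find the roots of sin(x)cos(x) - sin(x) - cos(x) on [0, 2π] numerically.
import numpy as np
from scipy.optimize import brentq

g = lambda x: np.sin(x) * np.cos(x) - np.sin(x) - np.cos(x)
xs = np.linspace(0, 2 * np.pi, 2001)
roots = [brentq(g, xs[i], xs[i + 1])
         for i in range(len(xs) - 1) if g(xs[i]) * g(xs[i + 1]) < 0]
print(roots)                             # two roots per period
print(np.sin(roots) * np.cos(roots))     # each equals 1 - sqrt(2) ≈ -0.4142, as derived above
```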
Why is the third derivative of cumulant generating function = skewness? Posted: 06 Mar 2022 07:25 AM PST From what I know, for a random variable $X$, skewness is defined as $$\mathbb{E}\left(\frac{X-\mathbb{E}(X)}{\sigma}\right)^3$$ or $$\frac{\mathbb{E}(X^3)-3\mathbb{E}(X)\sigma-\sigma^3}{\sigma^3}$$ where $\sigma$ is the standard deviation of $X$, and the third derivative of the cumulant generating function is $$\mathbb{E}(X^3)-3\mathbb{E}(X)\mathbb{E}(X^2)+2[\mathbb{E}(X)]^3$$ These $2$ formulae look totally different, so why is the third derivative of the cumulant generating function defined as the skewness of $X$? |
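A symbolic check on one concrete distribution shows how the two quantities compare; this is my own sketch (the exponential distribution is chosen only as an example):

```python
# Sketch: for X ~ Exp(lam), compare K'''(0), the third central moment,
# and the skewness E[((X - mu)/sigma)^3].
import sympy as sp

t, x, lam = sp.symbols('t x lam', positive=True)
M = lam / (lam - t)                                   # MGF of Exp(lam), valid for t < lam
K = sp.log(M)                                         # cumulant generating function
K3 = sp.diff(K, t, 3).subs(t, 0)                      # third cumulant: 2/lam**3
pdf = lam * sp.exp(-lam * x)
mu = sp.integrate(x * pdf, (x, 0, sp.oo))             # 1/lam
var = sp.integrate((x - mu)**2 * pdf, (x, 0, sp.oo))  # 1/lam**2
m3 = sp.integrate((x - mu)**3 * pdf, (x, 0, sp.oo))   # 2/lam**3
print(sp.simplify(K3 - m3))                           # 0: K'''(0) equals the third central moment
print(sp.simplify(m3 / var**sp.Rational(3, 2)))       # 2: the skewness differs by the sigma^3 factor
```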
Statement on Lie Group and Lie Algebra Homomorphisms Posted: 06 Mar 2022 07:32 AM PST I am reading J.J. Duistermaat and J.A.C. Kolk's Lie Groups. I cannot figure out (1.8.6). The textbook says, and I quote: Because $ad$ is a Lie algebra homomorphism: $\mathfrak g\to\mathfrak{gl(g)}$, one has $e^{ad(ad\,X)}\circ ad\,Y=ad(e^{ad\,X}\,Y)$. I know that a Lie algebra homomorphism is a function that preserves the Lie bracket, but I cannot figure out how to prove the statement (1.8.6). |
Is my LU Factorization Incorrect? Posted: 06 Mar 2022 07:25 AM PST I have the following matrix I am trying to decompose into its respective $L$ and $U$ parts for $A = LU$. So I have $$\begin{bmatrix} 1 && 4 && 3 \\ 0 && -10 && -5 \\ 0 && -8 && -4 \end{bmatrix}$$ $$ \left[ \begin{array}{cccccc|cccccc} 1 && 4 && 3 && && 1 && 0 && 1\\ 0 && -10 && -5 && && 0 && 1 && 0\\ 0 && -8 && -4 && && 0 && 0 && 1 \end{array} \right] $$ I notice that column $1$ is fine, and I need to make $R_3 C_2 = 0$ to achieve the upper triangle matrix on the left and the lower triangle matrix on the right. So I do: $R_3 \longrightarrow -\frac{5}{4} R_3 + R_2$ $$ \left[ \begin{array}{cccccc|cccccc} 1 && 4 && 3 && && 1 && 0 && 1\\ 0 && -10 && -5 && && 0 && 1 && 0\\ 0 && 0 && 0 && && 0 && 1 && -5/4 \end{array} \right] $$ However, the answer key is telling me the answer is: $$ \left[ \begin{array}{cccccc|cccccc} 1 && 4 && 3 && && 1 && 0 && 1\\ 0 && -10 && -5 && && 0 && 1 && 0\\ 0 && -8 && -4 && && 0 && 4/5 && 1 \end{array} \right] $$ |
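It is easy to cross-check against a library factorization; a sketch using scipy's $PLU$ decomposition:

```python
# Sketch: compare with scipy's LU factorization of the same matrix.
import numpy as np
from scipy.linalg import lu

A = np.array([[1.0, 4.0, 3.0],
              [0.0, -10.0, -5.0],
              [0.0, -8.0, -4.0]])
P, L, U = lu(A)          # A = P @ L @ U
print(L)                 # lower-triangular; the multiplier in the last row is 0.8 = 4/5
print(U)                 # upper-triangular; the last row becomes all zeros
print(np.allclose(P @ L @ U, A))
```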
Calculate the inverse Laplace transform by convolution Posted: 06 Mar 2022 07:25 AM PST Consider the following problem: determine $\displaystyle\mathscr{L}^{-1}\bigg[\frac{s^2+1}{s^2(s^2-4s+9)}\bigg]$ using formulas and using convolutions. Using the formulas I found that the solution would be: $\displaystyle\frac{1}{81}\cdot\Big[9t+16\sqrt{5}e^{2t}\sin(\sqrt{5}t)-4e^{2t}\cos(\sqrt{5}t)+4\Big].$ But using convolutions we encountered problems. I thus considered: $H(s)=\displaystyle\frac{s^2+1}{s^2(s^2-4s+9)}, F(s)=\displaystyle\frac{s^2+1}{s^2}$ and $G(s)=\displaystyle\frac{1}{s^2-4s+9}\Rightarrow f(t)=\delta_t+t$ and $g(t)=\displaystyle\frac{1}{\sqrt{5}}e^{2t}\sin(\sqrt{5}t)$ so $\displaystyle h(t)=f(t)*g(t)=\int_{0}^tf(t-\tau)g(\tau)d\tau=\int_{0}^{t}\Big(\delta(t-\tau)+t-\tau\Big)\cdot\Big(\frac{1}{\sqrt{5}}e^{2\tau}\sin(\sqrt{5}\tau)\Big)d\tau.$ At this point things are getting messy for me because I don't know what I can do with the Dirac function right now and what property to use for it. If possible, a hint on how to continue. Thank you |
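One way to gain confidence in the closed form found from the tables is to push it back through the forward transform symbolically; this check is mine, not part of the exercise:

```python
# Sketch: verify the closed form by taking its Laplace transform and
# comparing with the original rational function.
import sympy as sp

t, s = sp.symbols('t s', positive=True)
h = sp.Rational(1, 81) * (9*t + 16*sp.sqrt(5)*sp.exp(2*t)*sp.sin(sp.sqrt(5)*t)
                          - 4*sp.exp(2*t)*sp.cos(sp.sqrt(5)*t) + 4)
H = sp.laplace_transform(h, t, s, noconds=True)
target = (s**2 + 1) / (s**2 * (s**2 - 4*s + 9))
print(sp.simplify(H - target))     # 0 if the closed form is correct
```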
The value of $y=f(x)$ at $a$ Posted: 06 Mar 2022 07:28 AM PST This is a strange question, based on a thought I had when dealing with integrals. Given a variable $y$ such that $y=f(x)$, can we talk about the value of $y$ for a (separate) number $a$ if we were to substitute the value of $a$ for $x$ in the expression equal to $y$? Case 1: Would this imply that $y=y_a=f(x)=f(a)$ since $x=a$, i.e. every value of $x$ must be equal to $a$, or would it be a separate value $y_a=f(a)$ that can differ from $x$ (so it would be 'hypothetical') and could differ from $y$ itself? Case 2: Would it be that $y_a=f(a)$ is its own independent value, a function of $a$, that would have the same value as $y$ when we want to assign the value of $a$ to $x$? Perhaps asking what happens if we take the value of $a$ for $x$ implies that we are substituting $a$ for $x$ and hence receive an expression where $y = f(a)$ for all or some values of $x, a$. |
Let $X \subset \Bbb R^2$ be a union of the coordinate axes and the line $x+y=1$, $0\le x \le1$. Show that $X$ is homotopy equivalent to $\Bbb S^1$. Posted: 06 Mar 2022 07:38 AM PST
Denote the triangle formed by $(0,0),(1,0),(0,1)$ as $K$. The trick here is apparently to show that $K \simeq X$ and then use the fact that $K$ is homeomorphic to $\Bbb S^1$ to deduce that $X \simeq \Bbb S^1$. With this I've managed to get the following. Define $f :X \to K$ as $$f(x) = \begin{cases} x, & x \in K \\ (0,1), &x \in \{0\} \times [1,\infty) \\ (1,0), & x \in [1,\infty) \times \{0\} \\ (0,0), & x\in \{0\} \times (-\infty, 0] \\ (0,0), &x \in (-\infty, 0] \times \{0\} \end{cases}$$ and define the inclusion $\iota :K \to X$. We now have that $f \circ \iota = id_K$ and I think I can define $h:K \times [0,1] \to X$ as $$h(a,t)=(1-t)(\iota \circ f)(a) + t \cdot id_X(a)$$ to show that $\iota \circ f \simeq id_X$? The problem I'm having is that I didn't know that $K$ is homeomorphic to $\Bbb S^1$. What is the map giving this homeomorphism? |
Every metric space is paracompact (an elegant proof) Posted: 06 Mar 2022 07:42 AM PST I'm studying a different proof to show that each metric space is paracompact in the book: Singh, Tej Bahadur-Introduction to Topology. It is a very elegant construction unlike the inductive method that seems more cumbersome to me. However there are three things that I could not understand: (1) Why can some $E_{n, \alpha}$ be empty? (2) How can I intuitively or geometrically see the sets $F_{n,\alpha}$ and $V_{n,\alpha}$? (3) Why is it obtained that $F_{n,\alpha} \subset X - U_\beta $ or $F_{n,\beta} \subset X - U_\alpha$ in the last part of the proof? I'm very interested in knowing a good argument for these questions since I have not been able to figure it out on my own after many attempts and I'm eager to know the answers. Any help is appreciated. Here are some definitions: Definition. Let $X$ be a topological space. A collection of sets $\{U_{\alpha}\} \subset X$ (not necessarily open or closed) is said to be locally finite if to each ${x \in X}$, there is a neighborhood ${U}$ of ${x}$ that intersects only finitely many of the ${U_{\alpha}}$ Definition. Let ${\left\{U_{\alpha}\right\} }$ be a cover of a space ${X}$. Then a cover ${\left\{V_{\beta}\right\}}$ is called a refinement if each ${V_{\beta}}$ sits inside some ${U_{\alpha}}$ Definition. A Hausdorff space is paracompact if every open covering has a locally finite refinement. |
Are these generalizations known in the literature? Posted: 06 Mar 2022 07:45 AM PST Using $$\int_0^\infty\frac{\ln^{2n}(x)}{1+x^2}dx=\frac{\pi}{2^{2n+1}}\lim_{s\to \frac12}\frac{d^{2n}}{ds^{2n}}\csc(\pi s)=|E_{2n}|\left(\frac{\pi}{2}\right)^{2n+1}\tag{a}$$ where the $n$- derivative is explained here and $$\text{Li}_{a}(-z)+(-1)^a\text{Li}_{a}\left(-\frac1z\right)=-2\sum_{n=0}^{\lfloor{a/2}\rfloor }\frac{\eta(a-2n)}{(2n)!}\ln^{2n}(z)\tag{b}$$ which follows from repeatedly integrating the common identity $$\text{Li}_2(-z)+\text{Li}_2\left(-\frac1z\right)=-\frac{\ln^2(z)}{2}-2\eta(2)$$ after dividing by $z$, I managed to find: $$\int_0^\infty\frac{\ln^{2n}(x)}{1+yx^2}dx=\frac{\left(\frac{\pi}{2}\right)^{2n+1}}{\sqrt{y}}\sum_{k=0}^n\binom{2n}{2k}|E_{2n-2k}|\pi^{-2k}\ln^{2k}(y)\tag{c}$$ $$\int_0^\infty\frac{\ln^{2n-1}(x)}{1+yx^2}dx=\frac{-\left(\frac{\pi}{2}\right)^{2n-1}}{2\sqrt{y}}\sum_{k=0}^{n-1}\binom{2n-1}{2k+1}|E_{2n-2k-2}|\pi^{-2k}\ln^{2k+1}(y)\tag{d}$$ $$\int_0^\infty\frac{\ln^{2n}(x)\text{Li}_{2a+1}(-x^2)}{1+x^2}dx=|E_{2n}|\left(\frac{\pi}{2}\right)^{2n+1}\zeta(2a+1)$$ $$-\frac{\left(\frac{\pi}{2}\right)^{2n+1}}{(2a)!}\sum_{k=0}^n \binom{2n}{2k}|E_{2n-2k}|\pi^{-2k}(2a+2k)!(2^{2k+2a+1}-1)\zeta(2k+2a+1)\tag{e}$$ $$\int_0^\infty\frac{\ln^{2n-1}(x)\text{Li}_{2a}(-x^2)}{1+x^2}dx=$$ $$-\frac{\left(\frac{\pi}{2}\right)^{2n-1}}{2(2a-1)!}\sum_{k=0}^{n-1} \binom{2n-1}{2k+1}|E_{2n-2k-2}|\pi^{-2k}(2a+2k)!(2^{2k+2a+1}-1)\zeta(2k+2a+1)\tag{f}$$ $$\sum_{k=1}^\infty\frac{(-1)^k H^{(2a+1)}_k}{(2k+1)^{2n+1}}=\frac{|E_{2n}|}{(2n)!}\left(\frac{\pi}{2}\right)^{2n+1}\zeta(2a+1)$$ $$-\frac{\left(\frac{\pi}{2}\right)^{2n+1}}{(2n)!(2a)!}\sum_{k=0}^n \binom{2n}{2k}|E_{2n-2k}|\pi^{-2k}(2a+2k)!(2^{2k+2a+1}-1)\zeta(2k+2a+1)\tag{g}$$ $$\sum_{k=1}^\infty\frac{(-1)^k H^{(2a)}_k}{(2k+1)^{2n}}=$$ $$\small{\frac{\left(\frac{\pi}{2}\right)^{2n-1}}{2(2a-1)!(2n-1)!}\sum_{k=0}^{n-1} \binom{2n-1}{2k+1}|E_{2n-2k-2}|\pi^{-2k}(2a+2k)!(2^{2k+2a+1}-1)\zeta(2k+2a+1)}\tag{h}$$ Question: Are the results of $(c)$ to $(h)$ known in the literature? If the reader is curious about the correctness of the results above and wants to verify them on Mathematica, the Mathematica command of $|E_r|$ is Abs[EulerE[r]] Thanks, |
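Independently of the literature question, identities $(a)$ and $(c)$ are easy to spot-check numerically to high precision; a sketch of my own verification code using mpmath and sympy's Euler numbers:

```python
# Sketch: numerical spot-checks of (a) and (c).
from mpmath import mp, quad, log, pi, inf, sqrt, binomial
from sympy import euler

mp.dps = 30

def lhs_a(n):
    return quad(lambda x: log(x)**(2*n) / (1 + x**2), [0, 1, inf])

def rhs_a(n):
    return abs(int(euler(2*n))) * (pi/2)**(2*n + 1)

print(lhs_a(2), rhs_a(2))     # both ≈ 5*(pi/2)^5

def lhs_c(n, y):
    return quad(lambda x: log(x)**(2*n) / (1 + y*x**2), [0, 1, inf])

def rhs_c(n, y):
    return ((pi/2)**(2*n + 1) / sqrt(y)
            * sum(binomial(2*n, 2*k) * abs(int(euler(2*n - 2*k)))
                  * pi**(-2*k) * log(y)**(2*k) for k in range(n + 1)))

print(lhs_c(2, 3.7), rhs_c(2, 3.7))   # the two values should agree to working precision
```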
Straight line passes through a circle Posted: 06 Mar 2022 07:31 AM PST In a rectangular coordinate system, there is a circle $x^2 + y^2 -12 x+4 y-9=0$ and a straight line. If the straight line passes through $A(-2,-3)$ and intersects the circle at point $B$ and $C$, find the value of $A B \times A C$. The given solution is this: $A B \times A C=\left(\sqrt{(-2)^{2}+(-3)^{2}-12(-2)+4(-3)-9}\right)^{2}=\left(\sqrt{16}\right)^{2}= \boxed{16}$ It looks like we simply substitute the values $(-2, -3)$ into the equation for the circle and find the square root of the result. How come? I don't quite understand. |
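The number being computed is the power of the point $A$ with respect to the circle; a short symbolic check with an arbitrary unit direction (my own sketch) shows that the product of the two intersection distances really is that substituted value:

```python
# Sketch: intersect a line through A = (-2, -3) with the circle and check that
# the product of the distances AB * AC equals the substituted value 16.
import sympy as sp

t, th = sp.symbols('t theta', real=True)
X = -2 + t * sp.cos(th)          # line through A with unit direction (cos θ, sin θ),
Y = -3 + t * sp.sin(th)          # so |t| is the distance from A
circle = X**2 + Y**2 - 12*X + 4*Y - 9
coeffs = sp.Poly(sp.expand(circle), t).all_coeffs()   # quadratic in t: [a2, a1, a0]
print(sp.simplify(coeffs[-1] / coeffs[0]))            # product of the two roots = 16
```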
How to differentiate inverse of a matrix inside frobenius norm? Posted: 06 Mar 2022 07:34 AM PST Basically, I have an equation of the form $$ f = \left \| A^{-1} \right \|^2 $$ I need to differentiate the above equation with respect to $A$. The matrix $A$ is just a $2\times 2$ matrix, so I tried to solve it by brute force. It worked, but the solution is very long. Not to mention, it is very time-consuming. I was wondering if there is a general method to solve this problem that will work for a matrix $A$ of any order. |
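For what it's worth, matrix calculus gives a closed-form candidate: writing $f(A)=\operatorname{tr}(A^{-T}A^{-1})$ and using $d(A^{-1})=-A^{-1}\,dA\,A^{-1}$ leads to $\nabla_A f = -2\,A^{-T}A^{-1}A^{-T}$. This derivation is mine, not from the post, so here is a finite-difference sketch that checks it on random matrices of any size:

```python
# Sketch: finite-difference check of the candidate gradient
#   grad f(A) = -2 * A^{-T} A^{-1} A^{-T}   for f(A) = ||A^{-1}||_F^2.
import numpy as np

def f(A):
    return np.linalg.norm(np.linalg.inv(A), 'fro')**2

def candidate_grad(A):
    Ainv = np.linalg.inv(A)
    return -2 * Ainv.T @ Ainv @ Ainv.T

rng = np.random.default_rng(1)
for n in (2, 3, 5):
    A = rng.normal(size=(n, n)) + n * np.eye(n)      # well-conditioned test matrix
    G = candidate_grad(A)
    eps = 1e-6
    G_fd = np.zeros_like(A)
    for i in range(n):
        for j in range(n):
            E = np.zeros_like(A); E[i, j] = eps
            G_fd[i, j] = (f(A + E) - f(A - E)) / (2 * eps)
    print(n, np.max(np.abs(G - G_fd)))               # small value => formula is consistent
```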
Volume of objects like hypercube / hypersphere : $V_{n}^{(m)}(r) = \dots$ Posted: 06 Mar 2022 07:48 AM PST I am looking for some general form of equation for calculating the volume of specific geometric objects. The main idea is to find: $$ V_{n}^{(m)}(r) = \dots $$ Where: It's easy to find the equation for the hypersphere; it is: $$ \lim_{n \to \infty} V_{n}^{(m)}(r) = \pi^{\frac{m}{2}} \frac{1}{\Gamma(\frac{m}{2} + 1)} * r^m $$ For $m=2$, the general equation is: $$ V_{n}^{(2)}(r) = \frac{1}{2} n \sin(\frac{2 \pi}{n}) * r^2 $$ Thanks to the Platonic solids there are equations for $m=3$ and $n=3$, $n=5$: $$ V_{3}^{(3)}(r) = \frac{8 \sqrt{3}}{27} * r^3 $$ $$ V_{5}^{(3)}(r) = \frac{2 \sqrt{3} (5 + \sqrt{5})}{9} * r^3 $$ In general it's easy to see that the equation will have the form: $$ V_{n}^{(m)}(r) = f(n, m) * r^m $$ Is it possible to find an exact equation? What worries me is the limited number of Platonic solids in the $\text{3D}$ case ($m=3$). |
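For $m=2$ the polygon formula above does converge to the disc formula, which is a reassuring consistency check on the target $f(n,m)$; a tiny numerical sketch:

```python
# Sketch: check that V_n^(2)(r) = (1/2) n sin(2π/n) r^2 tends to π r^2 as n grows.
import numpy as np
from scipy.special import gamma

r = 1.0
for n in (3, 6, 12, 100, 10000):
    print(n, 0.5 * n * np.sin(2 * np.pi / n) * r**2)
print("ball, m=2:", np.pi**1 / gamma(2) * r**2)   # π^{m/2} / Γ(m/2 + 1) * r^m with m = 2
```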
Positive integer solutions to product Posted: 06 Mar 2022 07:28 AM PST For $a,b,c,d\in\mathbb{N}$, I am looking for all positive integer solutions to $$a(b+1)c(d+1)=(a+1)b(c+1)d.$$ I already figured out that $a=b$ and $c=d$ as well as $a=d$ and $b=c$. But now I am stuck and I am wondering how I can check for more solutions. How can I approach this? |
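A brute-force search over a small box is a quick way to look for solutions outside the two known families; a sketch:

```python
# Sketch: search a(b+1)c(d+1) = (a+1)b(c+1)d for 1 <= a,b,c,d <= N and report
# any solution not covered by the families {a=b, c=d} and {a=d, b=c}.
N = 30
extra = []
for a in range(1, N + 1):
    for b in range(1, N + 1):
        for c in range(1, N + 1):
            for d in range(1, N + 1):
                if a * (b + 1) * c * (d + 1) == (a + 1) * b * (c + 1) * d:
                    if not ((a == b and c == d) or (a == d and b == c)):
                        extra.append((a, b, c, d))
print(len(extra), extra[:20])
```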
How to calculate start radius, end radius and angle for clothoid segments? Posted: 06 Mar 2022 07:41 AM PST I have the following construction: Clothoid -- Circle Arc -- Clothoid -- Clothoid, which should form a smooth G2-continuous curve. The clothoids are parameterized by the clothoid parameter $A$ and the length $L$. For the circle, its radius and arc length are given. I need to get the clothoids parameterized through start radius, end radius and angle. Can this be done? I tried the usual formulas that can be found on Wikipedia and on this site. For the first clothoid I took a large start radius, i.e. $R=\infty$, and went on from there, taking the last end radius as the next starting one and so on. This didn't work as I expected. |
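Under the usual Euler-spiral convention that curvature grows linearly with arc length, $\kappa(s)=s/A^2$, a segment running from arc length $s_0$ to $s_1=s_0+L$ has start radius $A^2/s_0$ (infinite when $s_0=0$), end radius $A^2/s_1$, and swept tangent angle $(s_1^2-s_0^2)/(2A^2)$. The sketch below only implements that conversion and is my own reading, not a statement about the specific construction in the question:

```python
# Sketch: convert a clothoid segment given by (A, s0, s1) into
# (start radius, end radius, swept angle), assuming curvature k(s) = s / A^2.
import math

def clothoid_segment(A: float, s0: float, s1: float):
    R_start = math.inf if s0 == 0 else A * A / s0
    R_end = A * A / s1
    angle = (s1 * s1 - s0 * s0) / (2 * A * A)    # radians swept by the tangent
    return R_start, R_end, angle

# Example: a clothoid with parameter A = 50 traversed from s0 = 0 to s1 = 40.
print(clothoid_segment(50.0, 0.0, 40.0))         # (inf, 62.5, 0.32)
```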
List of pseudo-Catalan solids Posted: 06 Mar 2022 07:20 AM PST Convex solids can have all sorts of symmetries:
Question: is there a list of pseudo-Catalan solids?
[Edit: clarified definitions to reflect the content of the comments below.] |
How to differentiate a tensor Frobenius norm? Posted: 06 Mar 2022 07:42 AM PST I am wondering how to calculate $$\nabla_{\mathcal{T}} \|\mathcal{T}-\mathcal{C}\|_F^2$$ where $\mathcal{T}, \mathcal{C} \in \mathbb{R}^{n\times n\times n\times n}$ are $4$th order tensors. Any help would be appreciated. Thank you. |
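Since the squared Frobenius norm of a tensor is just the sum of the squares of its entries, the same elementwise reasoning as in the matrix case suggests the gradient $2(\mathcal{T}-\mathcal{C})$; a finite-difference sketch (my own check):

```python
# Sketch: numerically check grad_T ||T - C||_F^2 = 2 (T - C) for 4th-order tensors.
import numpy as np

rng = np.random.default_rng(0)
n = 3
T = rng.normal(size=(n, n, n, n))
C = rng.normal(size=(n, n, n, n))

f = lambda X: np.sum((X - C)**2)         # squared Frobenius norm of X - C
G = 2 * (T - C)                          # candidate gradient

eps = 1e-6
idx = (1, 2, 0, 1)                       # spot-check one entry
E = np.zeros_like(T); E[idx] = eps
fd = (f(T + E) - f(T - E)) / (2 * eps)
print(fd, G[idx])                        # the two numbers agree
```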
Finding inverse probability density function. Posted: 06 Mar 2022 07:25 AM PST Let ${X \sim \!\, Exp(\lambda)}$, that is, ${X}$ is exponentially distributed with the probability density function ${f_X(x)= \lambda e^{-\lambda x}}, x≥0$. Determine the density and distribution functions for ${Y:=X^2}$. What I've done so far: The exponential distribution function is by definition ${F_X(x)= 1- e^{-\lambda x}}$. Let ${g(x)= 1- e^{-\lambda x}}$, which is a continuous, strictly increasing function. Now I want to determine the density and distribution functions for ${Y:=g(X^2)=1- e^{-\lambda X}}$. ${x^2=y \leftrightarrow x= \sqrt{y}}$; here is where I get stuck. I am not sure if I am on the right track. The answer is ${F_Y(y)= 1- e^{-\lambda \sqrt{y}}}$, and obviously the density function will be the derivative of the distribution function. What needs to be done here? Thanks in advance. |
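The stated answer is easy to sanity-check by simulation before worrying about the change-of-variables details; a sketch ($\lambda=2$ chosen arbitrarily):

```python
# Sketch: Monte Carlo check that Y = X^2 with X ~ Exp(lam) has
# F_Y(y) = 1 - exp(-lam * sqrt(y)), hence f_Y(y) = lam * exp(-lam*sqrt(y)) / (2*sqrt(y)).
import numpy as np

lam = 2.0
rng = np.random.default_rng(0)
Y = rng.exponential(scale=1/lam, size=1_000_000)**2
for y in (0.05, 0.25, 1.0, 4.0):
    empirical = np.mean(Y <= y)
    theoretical = 1 - np.exp(-lam * np.sqrt(y))
    print(y, round(empirical, 4), round(theoretical, 4))
```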