Saturday, January 22, 2022

Recent Questions - Mathematics Stack Exchange

Characterize convex sets that are the convex hull of their extreme points

Posted: 22 Jan 2022 02:59 AM PST

The famous Krein-Milman theorem states that every compact convex set in a topological vector space is the convex hull of its extreme points. Note, however, that many convex sets exist in $\mathbb R^n$ which are not bounded but still the convex hull of their extreme points (e.g. the closed epigraph of a parabola), so I was wondering whether a true characterization (not a sufficient condition) for this property exists in finite-dimensional spaces: which closed convex sets are the convex hulls of their extreme points? All my search results lead to the Krein-Milman theorem which doesn't interest me. Due to the huge literature on convex sets I can't imagine that this hasn't been attempted before.

Reference for k-algebras in algebraic geometry

Posted: 22 Jan 2022 02:57 AM PST

I am currently learning about algebraic geometry, and $k$-algebras are mentioned without a (clear) definition, so the basics are assumed in the lecture. I tried to look the topic up, but did not find a "pleasant" overview.

In Lang's book he mentions modules, which I would try to avoid (if possible), as module theory is not connected with the lecture specifically. What I have seen in Abstract Algebra by Dummit and Foote was also not a specific section about the topic, and left me unsatisfied.

So I wanted to ask if someone knows a good reference where I can look this stuff up, with a view toward algebraic geometry. It is not much that I need, but to feel more comfortable I would like to have a reference at hand.

Thanks in advance.

Is it true that if $\dim N(A-\lambda_i) = AM(\lambda_i)$ then the matrix is always diagonalizable?

Posted: 22 Jan 2022 02:56 AM PST

Imagine we look at a matrix $A$ with the following characteristic polynomial:

$ \phi_A = (\lambda - 1) ^n$

So the eigenvalue $1$ has algebraic multiplicity $n$. The matrix $A$ is diagonalizable only if there are $n$ linearly independent eigenvectors associated with this eigenvalue.

What happens if we get one of the following characteristic polynomials?

$ \phi_A = (\lambda - 1) ^r (\lambda - 2)^k $, $r+k = n$

$ \phi_A = (\lambda - 1) ^r (\lambda - 2)^k (\lambda - 3)^l $, $r+k+l = n$

So in these cases the matrix is only diagonalizable if $\dim N(A-\lambda_i) = m(\lambda_i)$ for every eigenvalue, where $m(\lambda_i)$ denotes the algebraic multiplicity.

$\phi_A = (\lambda -1)(\lambda -2)(\lambda -3)(\lambda -4) \cdots (\lambda -n)$

In the last case it is clear that the algebraic and geometric multiplicities are equal, so this matrix must be diagonalizable.

Question:

So is this reasoning right? Is it true that if $\dim N(A-\lambda_i) = AM(\lambda_i)$ for every eigenvalue, then the matrix is always diagonalizable?
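
The criterion can also be checked numerically: diagonalizability (over $\mathbb C$) is equivalent to the geometric multiplicity equaling the algebraic multiplicity for every eigenvalue. A rough numpy sketch (the eigenvalue-clustering tolerance is an ad hoc choice, not part of the theorem):

```python
import numpy as np

def is_diagonalizable(A, tol=1e-6):
    """Diagonalizable over C iff, for every eigenvalue lam, the geometric
    multiplicity dim N(A - lam*I) equals the algebraic multiplicity."""
    n = A.shape[0]
    eigvals = np.linalg.eigvals(A)
    used = np.zeros(n, dtype=bool)
    for i, lam in enumerate(eigvals):
        if used[i]:
            continue
        cluster = np.abs(eigvals - lam) < tol   # numerically equal eigenvalues
        used |= cluster
        am = int(cluster.sum())                              # algebraic multiplicity
        gm = n - np.linalg.matrix_rank(A - lam * np.eye(n))  # geometric multiplicity
        if gm != am:
            return False
    return True

ok_distinct = is_diagonalizable(np.diag([1.0, 2.0, 3.0]))          # distinct eigenvalues
ok_jordan = is_diagonalizable(np.array([[1.0, 1.0], [0.0, 1.0]]))  # Jordan block: AM=2, GM=1
```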

Probability - Expected Profit

Posted: 22 Jan 2022 02:59 AM PST

Specifications are given for the weight of a packaged product. The package is rejected if it is too heavy or too light. Historical data suggests that $0.95$ is the probability that the product meets weight specifications and $0.002$ is the probability that the product is too light. For each single packaged product, the manufacturer invests $20$ in production and the selling price is $25$. For each $10000$ packages sold, what profit is received by the manufacturer if all packages meet weight specifications?

My solution to this problem is Profit $= (0.95)(10000)(25) - (0.95)(10000)(20) = 47{,}500$.

This is what I got, but the answer in the book is $50{,}000$. Can someone help me see what I did wrong in my solution?
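
A side-by-side computation of the two readings. The book's answer drops the $0.95$ factor because the problem conditions on all packages meeting specifications, so none are rejected; weighting by $0.95$ computes the expected profit instead:

```python
packages = 10_000
cost, price = 20, 25   # per-package production cost and selling price

# "If all packages meet weight specifications" -> none are rejected,
# so no probability weighting applies:
profit_all_meet_spec = packages * (price - cost)

# Weighting by P(meets spec) = 0.95 gives the expected profit instead:
profit_expected = 0.95 * packages * (price - cost)
```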

Recovering 3D from inverse disparity and intrinsic matrix

Posted: 22 Jan 2022 02:50 AM PST

I am facing the challenge of recovering the $Z$ coordinate from an inverse disparity map and the intrinsic matrix $K$. The $Z$ coordinate does not have to be metric, but it has to be scale-aware.

$$ K_{3\times 3} = \left[ {\begin{array}{cccc} 721 & 0 & 596 \\ 0 & 721 & 149 \\ 0 & 0 & 1 \\ \end{array} } \right] $$

I know that $z=f\frac{T}{d}$, where $T$ is the distance between two coplanar cameras, $f$ is the focal length, and $d$ is the disparity. However, I am not told any extrinsics other than that the camera is 1.5m above the floor. The disparity is already an inverse disparity, so I guess $z=f \times d$? Also, what do I do with the other parameters in my $K$ matrix (596, 149)? Any help is welcome.
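
On the last question: the $596$ and $149$ entries are the principal point $(c_x, c_y)$. They do not enter the depth itself, only the recovery of $X$ and $Y$ once $Z$ is known. A sketch of the usual back-projection (the pixel and depth values below are hypothetical):

```python
import numpy as np

# Intrinsic matrix from the question: f = 721, principal point (c_x, c_y) = (596, 149).
K = np.array([[721.0,   0.0, 596.0],
              [  0.0, 721.0, 149.0],
              [  0.0,   0.0,   1.0]])

def backproject(u, v, Z, K):
    """Back-project pixel (u, v) with depth Z into camera coordinates.
    The principal point only shifts the X, Y recovery, not Z itself."""
    X = (u - K[0, 2]) * Z / K[0, 0]
    Y = (v - K[1, 2]) * Z / K[1, 1]
    return np.array([X, Y, Z])

# The principal point itself maps straight down the optical axis:
p = backproject(596.0, 149.0, 2.0, K)
```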

Let be $G$ a group of order $150$ with an element $g$ of order $25$, then $G$ has a subgroup $K$ of order $5$ which is normal and cyclic.

Posted: 22 Jan 2022 02:46 AM PST

Let be $G$ a group of order $150$ with an element $g$ of order $25$, then $G$ has a subgroup $K$ of order $5$ which is normal and cyclic.

Since $G$ has an element of order $25$, then $C_{25} \cong \: \langle \:g\: \rangle \:\leqslant G$.$\:$ Now $\langle \:g\: \rangle$ contains a subgroup of order $d$ for any $d$ divisor of $25$, in particular $\langle \:g^5\: \rangle \unlhd \langle \:g\: \rangle$, with $|g^5|=\frac{25}{(25,5)} = 5$,$\:$ so $\langle \:g^5\: \rangle \cong \: C_5 \: \leqslant G$.

We can observe that $\langle \:g\: \rangle$ is a Sylow $5$-subgroup of $G$, and to conclude that $\langle \:g^5\: \rangle$ is normal in $G$ I want to show that $\langle \:g\: \rangle$ is normal in $G$. Can I deduce this directly from its being a cyclic Sylow $5$-subgroup? Being a cyclic subgroup of a group doesn't imply being normal (take $S_3$ for example).

Obviously if $n_5(G) = 1$ then $\langle \:g\: \rangle$ is normal; otherwise $n_5(G) = 6$, and if $P$, $Q$ are two different Sylow $5$-subgroups of $G$, then by order considerations $U = P \cap Q$ has $5$ elements, and it is easy to show that it is a normal subgroup of $G$. Now $G/U$ has order $30$ and it has a normal Sylow $5$-subgroup (we proved this in class), so by the first homomorphism theorem and the correspondence theorem applied to the canonical map, $G$ has a normal subgroup of order $25$, and it has to be $\langle \:g\: \rangle$ (so $n_5(G) = 1$, in retrospect).

Why do $SO(n,\mathbb{R})$ and $O(n,\mathbb{R})$ have the same Lie algebra?

Posted: 22 Jan 2022 02:48 AM PST

If we define $O(n,\mathbb{R})=\{A\in GL(n,\mathbb{R}):A^tA=I\}$ and $SL(n,\mathbb{R})=\{A\in GL(n,\mathbb{R}):\det(A)=1\}$ then $SO(n,\mathbb{R})=SL(n,\mathbb{R})\cap O(n,\mathbb{R}).$

I know that the Lie algebra of $O(n,\mathbb{R})$ consists of the skew-symmetric matrices and that the Lie algebra of $SL(n,\mathbb{R})$ consists of the matrices with zero trace. I was guessing that the Lie algebra of $SO(n,\mathbb{R})$ should then consist of the skew-symmetric matrices with zero trace. However, this is not the case.

Question 1: How do I show that the Lie algebra of $SO(n,\mathbb{R})$ is the same as the Lie Algebra of $O(n,\mathbb{R})?$

I also noticed that, on the other hand, the Lie algebra of $SU(n,\mathbb{C})$ consists exactly of the skew-Hermitian matrices with zero trace. I am guessing an answer to the first question should explain why this is true. Still, I wonder:

Question 2: Under what conditions do we have that $N=H\cap G$ implies $\mathfrak{n}=\mathfrak{h}\cap \mathfrak{g}$ where $N,H,G$ are Lie Groups and $\mathfrak{n},\mathfrak{h}$ and $\mathfrak{g}$ are their associated Lie Algebras?
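
A remark on Question 1: the guessed description is not actually a smaller space, because skew-symmetry already forces the diagonal entries, and hence the trace, to vanish:

$$A^{t} = -A \;\Longrightarrow\; a_{ii} = -a_{ii} \;\Longrightarrow\; a_{ii} = 0 \;\Longrightarrow\; \operatorname{tr}(A) = \sum_{i} a_{ii} = 0.$$

So "skew-symmetric with zero trace" is the same set as "skew-symmetric", i.e. $\mathfrak{sl}(n,\mathbb{R})\cap\mathfrak{o}(n)=\mathfrak{o}(n)$. This is consistent with $SO(n,\mathbb{R})$ being the identity component of $O(n,\mathbb{R})$: a Lie group and its identity component always share a Lie algebra.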

Why does $\int \frac{1}{\sqrt u}du \neq \ln(\sqrt u)$ hold true?

Posted: 22 Jan 2022 02:46 AM PST

I do know the integral of the function written in the title. My question is a different one: I do not understand why $\int \frac{1}{\sqrt{u}}\,du \neq \ln(\sqrt{u})$ holds. As far as I understand, the derivative of $\ln(x)$ is $\frac{1}{x}$. I also know that $\int \frac{1}{x}\,dx = \ln|x|$ holds.

What am I not seeing? Where is my thinking wrong?
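
The missing piece is the chain rule: $\frac{d}{du}\ln(\sqrt u) = \frac{1}{\sqrt u}\cdot\frac{d}{du}\sqrt u = \frac{1}{2u}$, which is not $\frac{1}{\sqrt u}$. The $\ln$ rule only applies when the integrand is $1/x$ in the integration variable itself. A quick symbolic check (assuming sympy is available):

```python
import sympy as sp

u = sp.symbols('u', positive=True)

# The correct antiderivative of 1/sqrt(u) is 2*sqrt(u):
antideriv = sp.integrate(1 / sp.sqrt(u), u)

# Differentiating the proposed answer exposes the chain-rule factor 1/(2u):
wrong_deriv = sp.diff(sp.log(sp.sqrt(u)), u)
```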

Inverse Mellin transform of $\Gamma^3(s)$

Posted: 22 Jan 2022 02:32 AM PST

Is the inverse Mellin transform of $\Gamma^3(s)$ known? Mathematica says that the inverse Mellin transform of $\Gamma^2(s)$ is $2 K_0(2\sqrt{x})$, where $K_0(x)$ is the Bessel function. I would like to know whether the inverse Mellin transform of $\Gamma^3(s)$ exists in terms of special functions other than the Meijer $G$-function.

Proving that $\sum\limits_{i=1}^{2006}f(i/2007)=1003$ if $f(x)=2008^{2x} /(2008 + 2008^{2x})$ from the 10th PMO

Posted: 22 Jan 2022 02:35 AM PST

From the 10th Philippine Mathematical Olympiad:

Let $f$ be the function defined by $$f(x) = \frac{2008^{2x}}{2008 + 2008^{2x}}, \qquad x \in \mathbb{R}.$$ Prove that $$f\left(\frac{1}{2007}\right) + f\left(\frac{2}{2007}\right) + \cdots + f\left(\frac{2005}{2007}\right) + f\left(\frac{2006}{2007}\right) = 1003.$$

The given solution was:

We first show that the function satisfies the identity $f(x) + f(1 - x) = 1.$ \begin{align*}f(1 - x) &= \frac{2008^{2(1-x)}}{2008 + 2008^{2(1 - x)}} \\\\ f(1 - x) &= \frac{2008^2 2008^{-2x}}{2008 + 2008^2 2008^{-2x}} \\\\ f(1 - x) &= \frac{2008}{2008^{2x} + 2008} \\\\\\\\ f(x) + f(1 - x) &= \frac{2008^{2x}}{2008 + 2008^{2x}} + \frac{2008}{2008^{2x} + 2008} \\\\ f(x) + f(1 - x) &= 1\end{align*} Pairing off the terms of the left-hand side of the desired equality into $$\left[f\left(\frac{1}{2007}\right) + f\left(\frac{2006}{2007}\right)\right] + \cdots + \left[f\left(\frac{1003}{2007}\right) + f\left(\frac{1004}{2007}\right)\right],$$ and applying the above identity solve the problem.


Now, this is not easily noticeable, at least for me. What I tried to do was approximate the sum through the integral $$I(a,b,c,d) = \int_c^d \frac{a^x}{a^x + b}\,dx$$ which, when solved, is equal to $$I(a,b,c,d) = \frac{1}{\ln a}\ln\left(\frac{a^d + b}{a^c + b}\right)$$ as I thought that $2006$ subdivisions would be enough for the integral to approximate the sum. The result of the integral is rounded up to compensate for the difference. The given values in the problem are $a = 2008^{2/2007}$, $b = 2008$, $c = 1$, and $d = 2006$. By substitution, this gives us $$I = \frac{1}{\ln(2008^{2/2007})}\ln\left(\frac{2008^{4012/2007} + 2008}{2008^{2/2007} + 2008}\right).$$ Solving this gives us $$I = \frac{2005}{2} = 1002.5$$ and rounding this up gives us $1003$ as desired.[1]


My questions are: How valid is this solution, and are there alternative solutions that do not use the given one?


Additional information:
[1] Solution for $I = \frac{1}{\ln(2008^{2/2007})}\ln\left(\frac{2008^{4012/2007} + 2008}{2008^{2/2007} + 2008}\right)$

We know that $\ln (p/q) = \ln p - \ln q$. Hence, $$\ln\left(\frac{2008^{4012/2007} + 2008}{2008^{2/2007} + 2008}\right) = \ln(2008^{4012/2007} + 2008) - \ln(2008^{2/2007} + 2008).$$ Let $u = 2008^{4012/2007} + 2008$ and let $v = 2008^{2/2007} + 2008$. This means that we are solving for $\ln u - \ln v$.

Notice that $u = 2008^{4012/2007} + 2008$ is equivalent to $u = 2008(2008^{2005/2007} + 1)$. Then, $$\ln u = \ln 2008 + \ln(2008^{2005/2007} + 1).$$ Also, $v = 2008^{2/2007} + 2008$ is equivalent to $v = 2008(2008^{2005/2007} + 1)(2008^{-2005/2007})$. Then, \begin{align*}\ln v &= \ln 2008 + \ln(2008^{2005/2007} + 1) - \ln(2008^{-2005/2007}) \\ \ln v &= \ln 2008 + \ln(2008^{2005/2007} + 1) - \frac{2005}{2007}\ln 2008.\end{align*} Solving for $\ln u - \ln v$, \begin{align*}\ln u - \ln v &= \ln 2008 + \ln(2008^{2005/2007} + 1) - \left(\ln 2008 + \ln(2008^{2005/2007} + 1) - \frac{2005}{2007}\ln 2008\right) \\ \ln u - \ln v &= \ln 2008 + \ln(2008^{2005/2007} + 1) - \ln 2008 - \ln(2008^{2005/2007} + 1) + \frac{2005}{2007}\ln 2008 \\ \ln u - \ln v &= \frac{2005}{2007}\ln 2008\end{align*} We are solving for $$I = \frac{1}{\ln(2008^{2/2007})}\ln\left(\frac{2008^{4012/2007} + 2008}{2008^{2/2007} + 2008}\right).$$ We know that $$\ln\left(\frac{2008^{4012/2007} + 2008}{2008^{2/2007} + 2008}\right) = \frac{2005}{2007}\ln 2008,$$ hence, \begin{align*}I &= \frac{1}{\ln(2008^{2/2007})}\cdot\frac{2005}{2007}\ln 2008 \\ I &= \frac{2005}{2007}\cdot \frac{1}{\frac{2}{2007}\ln 2008}\cdot \ln 2008 \\ I &= \frac{2005}{2007} \cdot \frac{2007}{2} \\ I &= \frac{2005}{2}. \qquad \blacksquare\end{align*}
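
Both the pairing identity and the final value of the sum are easy to verify numerically (a sanity check rather than a proof):

```python
def f(x):
    return 2008 ** (2 * x) / (2008 + 2008 ** (2 * x))

# Pairing identity: f(x) + f(1 - x) = 1, so the 2006 terms pair into 1003 ones.
pair = f(1 / 2007) + f(2006 / 2007)
total = sum(f(i / 2007) for i in range(1, 2007))
```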

Prove that the lemniscate is not a smooth manifold

Posted: 22 Jan 2022 02:55 AM PST

Let $M$ be the lemniscate $y^2=4x^2-4x^4$:

[figure: plot of the lemniscate]

I want to show that $M$ cannot be a smooth manifold.

Consider $M$ with the topology that makes $M$ a topological subspace of $\mathbb R^2$. It is pretty clear that $M$ cannot admit a local chart around the point $(0,0)$. If it did, a neighborhood $U=B_\varepsilon(0,0)\cap M$ would be homeomorphic to some open interval $(a,b)$ in $\mathbb R$ via some homeomorphism $\varphi$. But if we remove the point $(0,0)$ from $U$ and $\varphi(0,0)$ from $(a,b)$, then $U-\{(0,0)\}$ has four connected components while $(a,b)-\{\varphi(0,0)\}$ has only two connected components.

This proves that $M$ with the topology induced by $\mathbb R^2$ cannot be a smooth manifold.

My question is: How can I show that $M$ is not a manifold for any topology in $M$?

Probability of drawing three balls in a row of the same colour

Posted: 22 Jan 2022 02:38 AM PST

What is the probability of getting 3 white balls in a draw of 3 balls from a box containing 15 balls? (The answer should be 1/13.) (The probability of drawing 2 white balls in a row is 1/5.)

I tried to do something like this: 1/5 of 15 = (15 ÷ 5) × 1 = 3. But that is wrong, as the answer should be 1/13.
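
The question as quoted omits the number of white balls, but the two stated answers pin it down. A sketch assuming draws without replacement from $w$ white balls among 15:

```python
from fractions import Fraction
from math import comb

n = 15  # total balls in the box

# P(2 white in a row) = C(w,2)/C(15,2) = 1/5 determines the white count w:
w = next(w for w in range(n + 1)
         if Fraction(comb(w, 2), comb(n, 2)) == Fraction(1, 5))

# With that many white balls, P(3 white in a row) comes out to 1/13:
p3 = Fraction(comb(w, 3), comb(n, 3))
```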

How do I prove that there are no retractions from $S^1\times \bar{B^2}$ to $A$?

Posted: 22 Jan 2022 02:36 AM PST

I want to show that there is no retraction from $S^1\times \bar{B^2}$ to $A$, where $A$ is the following:

[figure showing $A$ omitted]

I wanted to do this by contradiction, so assume that we have a retraction $r:S^1\times \bar{B^2}\rightarrow A$. First, we know that $$r_*:\Pi_1(S^1\times \bar{B^2})\rightarrow \Pi_1(A)$$ is a surjection. But this doesn't help me, since I only know that $\Pi_1(S^1\times \bar{B^2})=\Bbb{Z}$ and I don't know anything about $A$. Therefore I wanted to use the equivalent characterisation of a retraction, namely that $r\circ \iota=id_A$, where $\iota:A\rightarrow S^1\times \bar{B^2}$ is the inclusion map. But there I also got stuck, because I can only consider $(r\circ \iota)_*:\Pi_1(A)\rightarrow \Pi_1(A)$.

Therefore I wanted to ask if someone could help me?

Thanks for your help.

If diagonalizable matrices are not dense over $\Bbb R$, how common are they?

Posted: 22 Jan 2022 02:36 AM PST

The link The diagonalizable matrices are not dense in the square real matrices says that diagonalizable matrices are dense over $\Bbb C$ but not over $\Bbb R$. If that's the case, then the logical question is: what percentage of matrices over $\Bbb R$ are diagonalizable? The answer is going to be "it depends", so let's make it concrete. We pick each entry of an $n \times n$ matrix from i.i.d. uniform distributions over $[-1,1]$. What is the chance we'll get a diagonalizable matrix (over the reals)?
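
For the concrete uniform-entries question, here is a Monte Carlo sketch: with probability 1 the eigenvalues are distinct, so the matrix is diagonalizable over $\mathbb R$ exactly when its spectrum is real. The estimate below is illustrative only, not a closed-form answer:

```python
import numpy as np

rng = np.random.default_rng(0)

def frac_real_spectrum(n, trials=2000):
    """Estimate P(all eigenvalues real) for an n x n matrix with
    i.i.d. Uniform[-1, 1] entries."""
    hits = 0
    for _ in range(trials):
        A = rng.uniform(-1.0, 1.0, size=(n, n))
        lam = np.linalg.eigvals(A)
        if np.all(np.abs(lam.imag) < 1e-12):  # complex eigenvalues come in conjugate pairs
            hits += 1
    return hits / trials

p2 = frac_real_spectrum(2)   # estimated probability for n = 2
```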

Type Theory: we cannot prove double negation, but can we prove it is unprovable?

Posted: 22 Jan 2022 02:48 AM PST

I'm currently trying to learn type theory from the first chapter of HoTT. It is remarked that we cannot prove $\neg\neg A \rightarrow A$, when $A$ is interpreted as a proposition, or, equivalently, we cannot construct an element of $((A\rightarrow\mathbf{0})\rightarrow \mathbf{0})\rightarrow A$, when $A$ is interpreted as a type. However, can we prove from within type theory that this is unprovable? In other words, can we construct an element of the following type?

$$\Bigg(\prod_{A:\mathcal{U}} \big(((A\rightarrow\mathbf{0})\rightarrow \mathbf{0})\rightarrow A\big)\Bigg)\rightarrow \mathbf{0}$$

If so, what is such an element? I've been struggling to explicitly construct one myself, to no avail.

An exercise for positive map on operator system

Posted: 22 Jan 2022 02:34 AM PST

Definition. Let $A$ be a C*-algebra. If $A$ has a unit $1$ and $S$ is a self-adjoint ($S^*=S$) subspace of $A$ containing $1$, then we call $S$ an operator system.

Question: Let $S$ be an operator system, $B$ be a C*-algebra, and $\phi: S\rightarrow B$ be a positive map. How to prove that $\phi$ extends to a positive map on the norm closure of $S$?

This is an exercise from Page 21 of the book Completely Bounded Maps and Operator Theory by V. Paulsen.

$A^T$ has the same Smith normal form as $A$

Posted: 22 Jan 2022 02:38 AM PST

Let $A\in R^{n\times m}$ be a matrix over a PID $R$. Show that $A$ and its transpose $A^T$ have the same Smith normal form, meaning that the ideals $\{0_R\}\subsetneq \langle d_r \rangle \subseteq \dots \subseteq \langle d_2 \rangle \subseteq \langle d_1 \rangle$ and $r$ are the same for both matrices.


My attempt is pretty short and feels "too easy", which is why I want to check whether it is correct and I didn't miss anything.

Let $A \in R^{n\times m}$ and let $S,T$ be invertible matrices such that $SAT=\text{diag}(d_1,...,d_r)$. Then: \begin{align*} \Rightarrow \quad A = S^{-1}\text{diag}(d_1,...,d_r)T^{-1} \end{align*} where $S^{-1}, T^{-1}$ exist since $S,T$ are invertible by definition. \begin{align*} \Rightarrow \quad A^T &= (S^{-1}\text{diag}(d_1,...,d_r)T^{-1})^T = (T^{-1})^T (\text{diag}(d_1,...,d_r))^T(S^{-1})^T \\ &=(T^{-1})^T \text{diag}(d_1,...,d_r)(S^{-1})^T \end{align*} because the transpose of a matrix with entries only on its main diagonal stays the same. So $A^T$ seems to have the same invariants $d_1,...,d_r$ as $A$.

Is this correct or am I missing something?

Approximately solve the equation. Find the first two terms of the approximation.

Posted: 22 Jan 2022 02:33 AM PST

Approximately solve the equation. Find the first two terms of the approximation.

For $a\gg 1$ and $a\ll 1$:

$\ln x=e^{-ax}$

$x=e^{e^{-ax}}$

$x_{0}\sim1$

$x_{1}=e^{e^{-a}}$

$\left|e^{e^{-a}}-1\right|\ll 1$

$x\approx 1+e^{-a}+\frac{e^{-2a}}{2}$

Have I handled the first case correctly, and how should I proceed with the second?
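
For the $a \gg 1$ case, writing $t = e^{-a}$ (small) and expanding $x = e^{t}$ reproduces the stated approximation, with first two terms $1 + e^{-a}$. A symbolic check (assuming sympy is available); for $a \ll 1$ a different ansatz is needed, since there $e^{-ax} \approx 1$ and the leading order is $x_0 = e$:

```python
import sympy as sp

t = sp.symbols('t', positive=True)   # t = exp(-a), small when a >> 1

# x = exp(exp(-a)) = exp(t); expand around t = 0 to second order:
expansion = sp.series(sp.exp(t), t, 0, 3).removeO()   # 1 + t + t**2/2
```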

Which of the two quantities $\sin 28^{\circ}$ and $\tan 21^{\circ}$ is bigger.

Posted: 22 Jan 2022 02:48 AM PST

I have been asked which of the two quantities $\sin 28^{\circ}$ and $\tan 21^{\circ}$ is bigger, without resorting to a calculator.

My Attempt:

I tried taking $f(x)$ to be

$f(x)=\sin 4x-\tan 3x$

$f'(x)=4\cos 4x-3\sec^23x=\cos 4x(4-3\sec^23x\sec 4x)$

but to no avail.

I also tried rewriting $\tan^2 21^{\circ}-\sin^228^{\circ}=\tan^2 21^{\circ}-\sin^221^{\circ}+\sin^221^{\circ}-\sin^228^{\circ}=\tan^2 21^{\circ}\sin^221^{\circ}+\sin^221^{\circ}-\sin^228^{\circ}$

but again no luck.

There doesn't appear to be a general way of doing this.

Integral submanifolds of an involutive distribution over $M$ possess a unique structure of smooth submanifold of $M$

Posted: 22 Jan 2022 02:48 AM PST

Consider a smooth manifold $M$, an involutive distribution $D$ over $M$, and $N\subset M$. We need to prove that if $N$ admits a structure of integral submanifold of $D$, then $N$ admits a unique structure of smooth submanifold of $M$.

I'm sure that the Global Frobenius Theorem has to be used somewhere, because we can restate it as "any involutive distribution gives rise to one and only one foliation", which is similar to what I want to achieve. The Local Frobenius Theorem also assures us that $D$ has to be integrable, but I don't know if this is of any help.

Could anyone please help me out? All of these concepts are very new to me, so any lead of what to do would be helpful (wouldn't like to have the full answer, just a hint).

Integral curves of a vector field not tangent to an embedded submanifold $S$

Posted: 22 Jan 2022 02:52 AM PST

I got stuck on a problem. Let $M$ be a smooth $n$-dimensional manifold and let $S$ be a compact embedded submanifold. Suppose $V$ is nowhere tangent to $S.$ Prove that there exists $\epsilon>0$ such that the flow of $V$ restricts to a smooth embedding $$\Phi:(-\epsilon,\epsilon)\times S \to M.$$ Firstly, by the compactness of $S,$ we find a positive $\epsilon>0$ for which all the integral curves with initial conditions in $S$ are defined on $(-\epsilon,\epsilon).$ The differential of $\Phi$, if I am not mistaken, should be of the form $$\left(\frac{\partial}{\partial t}, di_{S}\right),$$ where $i_{S}$ is the inclusion of $S$ in $M$; practically by definition this has maximal rank. The only part I'm not sure about is how to prove that $\Phi$ is open. Since $S$ is embedded, for every point $y \in S$ there is a chart $(U,\varphi)$ in $M$ in which $U\cap S$ is diffeomorphic under $\varphi$ to a set of the form $$\{ x \in \mathbb{R}^{n} : x^{k+1} = \dots = x^{n} = 0 \}.$$ Since $V$ is nowhere tangent to $S$, it is not restrictive to suppose the differential of $\varphi$ sends $V$ to the constant vector field $e_{k+1}.$ So the integral curves with initial conditions on $S$ should all be of the form
$$\gamma(t)=(x^1,\dots,x^{k},t,0,\dots,0).$$ So the set $\Phi((-\epsilon,\epsilon)\times S)$ is diffeomorphic to the following relatively open set of $\mathbb{R}^n$: $$ \{(x^1,\dots,x^{k},t,0,\dots,0): (x^1,\dots,x^{k}) \in \mathbb{R}^{k}, \, t \in (-\epsilon,\epsilon) \},$$ so the map $\Phi$ is open onto its image and we are done. Is this reasoning correct? Where should I be more precise?

Is $f^{-1}\big( \sqrt{xf(x)} \big)$ convex (for large $x$) when $f(x) = o(x)$ is concave and strictly increasing?

Posted: 22 Jan 2022 02:50 AM PST

Question

Let $f : (N, \infty) \to (0,\infty)$ be a strictly increasing, concave function such that $f(x) \to \infty$ and $f(x) = o(x)$ as $x \to \infty$, where $N > 0$. Also, let $f$ be twice differentiable (or more, if needed).

Define $g(x):= f^{-1}\big(\sqrt{xf(x)} \big)$ where the definition makes sense (i.e. for all $x$ such that $\sqrt{x f(x)} > f(N)$).

Does it follow that $g$ is convex for large $x$? That is, is the restriction $g|_{[K, \infty)}$ convex for some $K>0$?

Also: I'd like to know if $h(x):= g(x)/x$ is increasing for large $x$.


Thoughts

The idea is that the geometric mean $\sqrt{x f(x)}$ sort of pulls $f$ up toward the 45-degree line, straightening it out a bit. The result is still concave (which isn't too hard to show, knowing the geometric mean is a concave function), but "less" so. Applying the convex $f^{-1}$ to $f(x)$ yields $x$, sort of cancelling out the concavity. But applying $f^{-1}$ to $[xf(x)]^{1/2}$ should yield a convex function, since we already brought $f(x)$ a bit in this direction with the geometric mean.

What does the growth of a function actually represent?

Posted: 22 Jan 2022 02:47 AM PST

I am going through the book Discrete Mathematics by Kenneth Rosen. In chapter 3, there is a topic "The Growth of Functions", followed by big-O notation. I am really unable to understand what the growth of a function actually represents. Could you share some insights, with examples? That would be of great help. Thank you so much for the help.

PS: I am new to this topic.
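
Roughly, the "growth" of $f(n)$ is how fast it increases as $n$ gets large; big-O compares functions by this eventual behavior, ignoring constant factors. A small illustration (the two functions compared are just examples):

```python
import math

def ratios(f, g, ns):
    """f(n)/g(n) at increasing n; a ratio tending to 0 means
    f eventually grows strictly slower than g."""
    return [f(n) / g(n) for n in ns]

ns = [10, 100, 1_000, 10_000]
# 100*n*log(n) starts out bigger than n**2, but its growth is slower,
# so the ratio shrinks toward 0 as n increases:
r = ratios(lambda n: 100 * n * math.log(n), lambda n: n ** 2, ns)
```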

Distributing $r$ balls into $n$ cells. What is the probability that exactly $m$ cells contain exactly $k$ balls?

Posted: 22 Jan 2022 02:56 AM PST

$r$ balls are randomly distributed into $n$ cells (the balls are indistinguishable).

What is the probability that there are exactly $m$ cells that contain exactly $k$ balls each? That is, each of the remaining cells must contain $j \neq k$ balls.

My attempt was:

$$\frac { {n \choose m} {{r-mk+n-m-1} \choose {r-mk}} } { {{r+n-1} \choose {n-1}} } $$

Explanation: we distribute $r$ indistinguishable balls into $n$ cells, so $|\Omega|={{r+n-1} \choose {n-1}}$, which is the denominator.

In the numerator, I choose $m$ cells and put $k$ balls inside each of them (that makes $mk$ balls in total; we have ${n \choose m}$ ways of doing that). Then, the remaining $r-mk$ balls are distributed among the $n-m$ remaining cells (that's the ${{r-mk+n-m-1} \choose {r-mk}}$).

I believe my mistake was that I didn't make sure that the remaining $n-m$ cells do not contain $k$ balls as well, so the numerator is "too big". But I have no idea how to fix this.
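
The over-count is real, and a brute-force check on a small case makes it visible (enumerating all distributions under the "all multisets equally likely" model implicit in the $|\Omega|$ count):

```python
from itertools import product
from math import comb

def exact_count(r, n, k, m):
    """Count distributions of r indistinguishable balls into n cells
    with exactly m cells holding exactly k balls, by enumeration."""
    hits = total = 0
    for cells in product(range(r + 1), repeat=n):
        if sum(cells) != r:
            continue
        total += 1
        if sum(1 for c in cells if c == k) == m:
            hits += 1
    return hits, total

def attempted_numerator(r, n, k, m):
    """The numerator from the question, which can over-count."""
    return comb(n, m) * comb(r - m * k + n - m - 1, r - m * k)

# Small case: r = 4 balls, n = 3 cells, exactly m = 1 cell with exactly k = 1 ball.
hits, total = exact_count(4, 3, 1, 1)   # true count out of all distributions
num = attempted_numerator(4, 3, 1, 1)   # the formula's numerator, strictly larger
```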

Relationship between a matrix's spectral norm and its $\infty$-norm

Posted: 22 Jan 2022 02:37 AM PST

We know that if $x$ is a vector of length $n$ we have

$$\|x\|_{\infty} \leq \|x\|_2 \leq \sqrt{n}\|x\|_{\infty}$$

we also know that:

$$\|A\|_p=\max_{\|x\|_p \neq0}{\frac{\|Ax\|_p}{\|x\|_p}} = \max_{\|x\|_p=1}{\|Ax\|_p}_{}$$

but how can we use this information to prove that for an $m \times n$ matrix we have:

$$\sqrt{\frac{1}{m}}\|A\|_2 \leq \|A\|_{\infty} \leq \sqrt{n}\|A\|_2$$

In my search for an answer, I found that this is proved using the Cauchy–Schwarz inequality, but I don't know how exactly this is done, and I don't fully understand the Cauchy–Schwarz inequality either.
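
Before proving the inequality, it is easy to sanity-check it numerically (a random instance, not a proof):

```python
import numpy as np

rng = np.random.default_rng(1)
m, n = 4, 6
A = rng.standard_normal((m, n))

spec = np.linalg.norm(A, 2)           # spectral norm ||A||_2
row_sum = np.linalg.norm(A, np.inf)   # ||A||_inf = max absolute row sum

# sqrt(1/m) * ||A||_2 <= ||A||_inf <= sqrt(n) * ||A||_2
lower_ok = spec / np.sqrt(m) <= row_sum + 1e-12
upper_ok = row_sum <= np.sqrt(n) * spec + 1e-12
```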

Suppose that $A$ is a $4\times 4$ matrix and $A^3=2A$. Prove that $\det(A)=0$ or $\det(A)=\pm 4$

Posted: 22 Jan 2022 02:59 AM PST

Suppose that $A$ is a $4\times 4$ matrix and $A^3=2A$. Prove that $\det(A)=0$ or $\det(A)=\pm 4$.


I already proved that $\det(A)=\pm 4$ is possible, but I'm having problems with the $\det(A)=0$ case.
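
One route: taking determinants of $A^3 = 2A$ gives $\det(A)^3 = \det(2A) = 2^4\det(A) = 16\det(A)$, so $\det(A)\,(\det(A)^2 - 16) = 0$ and $\det(A) \in \{0, \pm 4\}$. Diagonal examples confirm that all three values actually occur:

```python
import numpy as np

s = np.sqrt(2.0)   # eigenvalues must satisfy t**3 = 2*t, i.e. t in {0, s, -s}

examples = [np.diag([0.0, s, s, s]),    # det = 0
            np.diag([s, s, s, s]),      # det = 4
            np.diag([-s, s, s, s])]     # det = -4

for A in examples:
    assert np.allclose(A @ A @ A, 2 * A)   # each satisfies A^3 = 2A

dets = [round(float(np.linalg.det(A))) for A in examples]
```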

Prove column space of product of matrices is a subset

Posted: 22 Jan 2022 02:39 AM PST

Let $A, B$ be matrices such that the product $AB$ exists. Is the column space of $AB$ a subset of the column space of $A$?

This is true, but how do I show it?

Attempt:

Let $m_1, ... , m_k$ be the columns of $B$, it follows that

$AB = \begin{bmatrix} Am_1 & ... & Am_k \end{bmatrix}$

$\implies \text{col } (AB) = \text{span } \{Am_1, ... , Am_k \}$

But how do I show it is a subset of $\text{col } (A)$?
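
The remaining step is that each $Am_i$ lies in $\text{col}(A)$ (it is $A$ applied to a vector), and a span of vectors inside a subspace stays inside that subspace. Numerically, this shows up as the fact that appending the columns of $AB$ to $A$ never raises the rank (random instance for illustration):

```python
import numpy as np

rng = np.random.default_rng(2)
A = rng.standard_normal((5, 3))
B = rng.standard_normal((3, 4))

# col(AB) ⊆ col(A): the augmented matrix [A | AB] spans no new directions.
rank_A = np.linalg.matrix_rank(A)
rank_aug = np.linalg.matrix_rank(np.hstack([A, A @ B]))
```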

What can we say about connected groups with the same Lie algebra?

Posted: 22 Jan 2022 02:50 AM PST

Each (finite-dimensional) Lie algebra has exactly one simply connected Lie group associated to it (up to isomorphism). What can we say about all other connected groups with the same Lie algebra ?

Thank you in advance

How to construct the matrix Lie group from the corresponding Lie algebra, using the exponential map?

Posted: 22 Jan 2022 02:58 AM PST

Let $\frak{g}$ be the (real) Lie algebra:

$${\frak g}=\{A \in M_n(\mathbb{R}) \mid \text{tr}(A)=0\}$$

with Lie bracket $[x,y]=xy-yx$ for $x,y\in \frak g$

Then how do I construct the corresponding Lie group by using the exponential map?
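
A concrete sketch of one direction: exponentials of traceless matrices land in $SL(n,\mathbb R)$, since $\det(e^A) = e^{\operatorname{tr}(A)} = 1$. The matrix exponential below is a plain truncated Taylor series, which is adequate for small matrices:

```python
import numpy as np

def expm(A, terms=40):
    """Matrix exponential by truncated Taylor series (fine for small norms)."""
    result = np.eye(A.shape[0])
    term = np.eye(A.shape[0])
    for k in range(1, terms):
        term = term @ A / k
        result = result + term
    return result

rng = np.random.default_rng(3)
X = rng.standard_normal((3, 3))
A = X - (np.trace(X) / 3) * np.eye(3)   # project onto sl(3, R): tr(A) = 0

det_g = np.linalg.det(expm(A))          # should equal 1: exp(A) lies in SL(3, R)
```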

For any positive integers $a,b$, one has $a^4|b^3$ implies $a|b$?

Posted: 22 Jan 2022 02:40 AM PST

This is an old exam problem I found online: for any positive integers $a,b$, $a^4\mid b^3$ implies $a\mid b$. Clearly if $a^4\mid b^3$, then $a\mid b^3$. It seemed simple on first reading, but I can't figure out how to show that $a\mid b$ follows, since $a$ is not necessarily prime. Is there some observation I'm missing?
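
The observation is to compare prime exponents: if $v_p$ denotes the exponent of the prime $p$, then $a^4 \mid b^3$ gives $4\,v_p(a) \le 3\,v_p(b) \le 4\,v_p(b)$ for every $p$, hence $v_p(a) \le v_p(b)$ and so $a \mid b$. A small exhaustive search agrees:

```python
# Whenever a**4 divides b**3, a should divide b: collect any counterexamples.
violations = [(a, b)
              for a in range(1, 60)
              for b in range(1, 200)
              if b ** 3 % a ** 4 == 0 and b % a != 0]
```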
