Recent Questions - Mathematics Stack Exchange
- Why is a monad on self C equivalent to a strong monad on C
- Expectation and Variance of sum of dependent variables
- What is the greatest possible perimeter of a right-angled triangle with integer side lengths if one of the sides has length 12?
- How to interpret the equation of motion of an inverted pendulum on a cart?
- Where Does the $2$ in Ricci Flow Come From?
- Prove that when two different lines intersect, then they lie in exactly one plane!
- The geometric interpretation of orthogonal matrix $Q^{T}Q=I$
- Factorise $x^{17}-x$
- How to calculate linear approximations for $3.97^{2.5}$
- How to calculate $x$ for $2^{2^{x}} = 36 \cdot 10^{14}$?
- Finding the total number (state space) of non-equivalent matrices
- Get the Transformation of two Coordinate systems by knowing the Pose of one Point in both of them!
- Let $A,B$ be $2\times 2$ matrices, $AB-BA=A^2$, and $\mathrm{Tr}(B) = 0$. Show that $B^2=0$.
- What does population proportion mean in statistics?
- Reformulation: why differential operators can be a basis
- Problem with Bertrand's ballot theorem
- Binomial distribution probability of defective product is 0.01
- Minimize a quadratic form over $\mathbb{Z}$, where the matrix is positive-definite and symmetric
- In how many ways can you arrange three men and three women if a single man cannot be between two women (in a row of six chairs)?
- Is the derivative of the conditional expectation of Y on X equal to the least squares projection?
- Complement of an angle
- Are all real and imaginary numbers complex numbers?
- How do I determine the lattice points of the convex hull of a set of {0,1}^n points?
- Bachelier model option pricing
- Galois extension of $K$ generated by the roots of polynomials $f(x_1,x_2, \cdots, x_n) \in K[x_1,x_2, \cdots, x_n]$.
- About the inequality $x^{x^{x^{x^{x^x}}}} \ge \frac12 x^2 + \frac12$
- How was the Ornstein–Uhlenbeck process originally constructed?
- Continuous mapping from matrices to eigenvalues
- How to compute the eigenvalue condition number of a matrix
Why is a monad on self C equivalent to a strong monad on C Posted: 18 Jul 2021 07:44 PM PDT In Chapter 12 of Call-by-Push-Value, Levy states that a strong monad on a Cartesian category $\mathcal C$ is equivalent to a monad in the 2-category of $[\mathcal C^{\mathit{op}},Sets]$-enriched categories on the simple fibration $\mathit{self}~\mathcal C$ over $\mathcal C$. I had trouble laying out the details of this and am wondering if someone can give me some help. A $[\mathcal C^{\mathit{op}},Sets]$-enriched endofunctor on $\mathit{self}~\mathcal C$ consists of
To show that this is a strong functor on $\mathcal C$, we must first find a family of $\mathcal C$-arrows $\mathit{t}_{X,Y} : X \times TY \to T(X \times Y)$. Given $X,Y \in \mathcal C_0$, we have $id_{X \times Y} : X \times Y \to X \times Y$. Mapping this through $(T_{Y,X \times Y})_X$ gives an arrow $t_{X,Y} : X \times TY \to T(X \times Y)$. We now need to prove that the family $t_{X,Y}$ is a natural transformation from $$- \times T- : \mathcal C^2 \to \mathcal C$$ to $$T(- \times -) : \mathcal C^2 \to \mathcal C$$ I'm struggling to figure this out. |
Expectation and Variance of sum of dependent variables Posted: 18 Jul 2021 07:43 PM PDT Q. Let X be a discrete random variable such that X = 0 with probability 0.5 and X = 1 with probability 0.5. Let Y be a discrete random variable such that Y = 1 when X = 0 and Y = 0 when X = 1. What is the mean and variance of X + Y? My Sol: $E(X+Y)=E(X)+E(Y)=0.5+\left[\sum_y y\,p(y\mid x)\right]=0.5+[0\cdot p(y{=}0\mid x{=}0)+0\cdot p(y{=}0\mid x{=}1)+1\cdot p(y{=}1\mid x{=}0)+1\cdot p(y{=}1\mid x{=}1)]=0.5+0.5=1$, and $\mathrm{Var}(X+Y)=\mathrm{Var}(X)+\mathrm{Var}(Y)+2\,\mathrm{cov}(X,Y)=\,??$ |
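Since $Y = 1 - X$ deterministically, $X + Y \equiv 1$, so the variance is $0$ and $\mathrm{cov}(X,Y) = -\frac14$. A quick enumeration over the two equally likely outcomes (a numerical sanity check, not a derivation) confirms this:

```python
# Y = 1 - X here: the two equally likely outcomes (x, y) are (0, 1) and (1, 0),
# so X + Y is identically 1.
outcomes = [((0, 1), 0.5), ((1, 0), 0.5)]

mean = sum(p * (x + y) for (x, y), p in outcomes)
var = sum(p * ((x + y) - mean) ** 2 for (x, y), p in outcomes)

e_x = sum(p * x for (x, y), p in outcomes)
e_y = sum(p * y for (x, y), p in outcomes)
cov = sum(p * x * y for (x, y), p in outcomes) - e_x * e_y

print(mean, var, cov)   # 1.0 0.0 -0.25
```

Plugging into the identity above: $\mathrm{Var}(X)+\mathrm{Var}(Y)+2\,\mathrm{cov}(X,Y) = 0.25 + 0.25 - 0.5 = 0$, consistent with $X+Y$ being constant.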
What is the greatest possible perimeter of a right-angled triangle with integer side lengths if one of the sides has length 12? Posted: 18 Jul 2021 07:42 PM PDT
I know this question can be done in a variety of ways, and the answer comes out to be $84$. However, my friend and I were trying this question using QM$\ge$AM, and we didn't get $84$ that way, so I decided to ask here. Let $x,y,z$ be the sides, with $x=12$. We have: $$\sqrt{\frac{x^2+y^2+z^2}{3}} \ge \frac{x+y+z}{3}$$ and $x,y,z$ are positive because a length can't be negative. Since we want the maximum value of the perimeter, and equality holds at $x=y=z$, plugging in the values I get the maximum perimeter as $36$. Why am I getting the wrong answer solving this way? I would be grateful if someone helped. Thanks. |
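Note that equality in QM–AM requires $x=y=z$, which no right triangle can satisfy (it would force $x^2+x^2=x^2$), so that equality case is unattainable here. A brute-force enumeration (a quick sketch, with an arbitrary search bound of $100$, which suffices since $12$ as a leg forces $(z-y)(z+y)=144$) confirms the answer $84$:

```python
import math

# Enumerate Pythagorean triples (a <= b < c) with all sides below a bound,
# keep those containing a side of length 12, and report the max perimeter.
best = None
for a in range(1, 100):
    for b in range(a, 100):
        c = math.isqrt(a * a + b * b)
        if c * c != a * a + b * b:
            continue          # hypotenuse not an integer
        if 12 in (a, b, c):
            p = a + b + c
            if best is None or p > best[0]:
                best = (p, (a, b, c))
print(best)   # (84, (12, 35, 37))
```

The maximizer is the $12$–$35$–$37$ triple; $12$ cannot be the hypotenuse since $144$ is not a sum of two positive squares.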
How to interpret the equation of motion of an inverted pendulum on a cart? Posted: 18 Jul 2021 07:42 PM PDT The schematic for an inverted pendulum on a cart is shown below: The horizontal force balance in the free body diagram of the pendulum is given as: $N=m \ddot{x}+m l \ddot{\theta} \cos \theta-m l \dot{\theta}^{2} \sin \theta$ However, I don't know where $-m l \dot{\theta}^{2} \sin \theta$ comes from. Is this an inertial force somehow? |
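The $-ml\dot\theta^{2}\sin\theta$ term is just the chain rule: the bob's horizontal position is $x_p = x + l\sin\theta$, and differentiating twice gives $\ddot x_p = \ddot x + l\ddot\theta\cos\theta - l\dot\theta^{2}\sin\theta$, where the $\dot\theta^{2}$ part is the centripetal component. A finite-difference check on sample trajectories (chosen arbitrarily, for illustration only):

```python
import math

l = 0.7
x = lambda t: 0.3 * t**2 + 0.1 * t          # sample cart trajectory
theta = lambda t: 0.5 * math.sin(2.0 * t)   # sample pendulum angle

def bob(t):
    # horizontal position of the pendulum bob relative to the ground
    return x(t) + l * math.sin(theta(t))

def second_derivative(f, t, h=1e-4):
    # central finite difference
    return (f(t + h) - 2.0 * f(t) + f(t - h)) / h**2

t0 = 1.234
th = theta(t0)
th_dot = math.cos(2.0 * t0)           # d/dt of 0.5*sin(2t)
th_ddot = -2.0 * math.sin(2.0 * t0)   # d^2/dt^2 of 0.5*sin(2t)
x_ddot = 0.6                          # d^2/dt^2 of 0.3*t^2 + 0.1*t

analytic = x_ddot + l * th_ddot * math.cos(th) - l * th_dot**2 * math.sin(th)
numeric = second_derivative(bob, t0)
print(abs(analytic - numeric) < 1e-4)   # True
```

Multiplying the acceleration by $m$ gives exactly the right-hand side of the force balance quoted above.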
Where Does the $2$ in Ricci Flow Come From? Posted: 18 Jul 2021 07:41 PM PDT I started learning about Ricci flow recently, which is always given as $$ \frac{\partial g}{\partial t}=-2\textrm{Ric}. $$ It would seem more natural to me to define Ricci flow instead by the equation $$ \frac{\partial g}{\partial t}=-\textrm{Ric}, $$ which omits the $2$. The only real difference in behavior between this flow and Ricci flow is that this one flows at half the rate. Wikipedia claims that the choice of $2$ in the equation is an arbitrary convention, but this is hard for me to stomach, since conventions in math are almost always motivated by something. Is there a good reason that Ricci flow is defined the way it is, or is the convention really arbitrary? |
Prove that when two different lines intersect, then they lie in exactly one plane! Posted: 18 Jul 2021 07:25 PM PDT Let two lines $l$ and $m$ intersect; then they lie in the same plane $V$. However, how does one prove that the plane $V$ is unique, that is, if $V'$ contains the lines $l$ and $m$, then $V'=V$? I have tried proof by contradiction but couldn't proceed. |
The geometric interpretation of orthogonal matrix $Q^{T}Q=I$ Posted: 18 Jul 2021 07:18 PM PDT For the inverse matrix, we have $Q^{-1}Q=I$, which means that if we multiply by $Q^{-1}$, the matrix first rotates and then rotates back (just an example). I am writing to ask whether there is a similar intuition for understanding $Q^{T}Q=I$. |
Factorise $x^{17}-x$ Posted: 18 Jul 2021 07:12 PM PDT Let's say I have an expression: $$x^{17}-x$$ I have to factorise it. What I did: I removed the common $x$ from the expression: $$x(x^{16}-1)$$ What I observed is that $x^{16}-1$ can be written as $(x^4)^2-1^2$. So, applying the difference-of-squares identity $a^2-b^2=(a+b)(a-b)$, I got the answer as $$x(x^4+1)(x^4-1)$$ The last term is a difference of squares too. So I again factorised it: $x(x^4+1)(x^2+1)(x^2-1)$ Then again: $$x(x^4+1)(x^2+1)(x+1)(x-1)$$ But when I used Wolfram|Alpha to check, I got $$x(x-1)(x+1)(x^2+1)(x^4+1)(x^8+1)$$ Where did I go wrong? Also, can we write that
|
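The discrepancy traces to the identity $x^{16}-1=(x^8)^2-1^2$, not $(x^4)^2-1^2$, so one more difference-of-squares pass produces the $(x^8+1)$ factor Wolfram|Alpha reports. A quick exact integer spot check of the full factorisation (two degree-$17$ polynomials that agree at more than $17$ points are identical, so agreement at $19$ points proves the identity):

```python
# Check x^17 - x = x(x-1)(x+1)(x^2+1)(x^4+1)(x^8+1) at 19 integer points.
def factored(x):
    return x * (x - 1) * (x + 1) * (x**2 + 1) * (x**4 + 1) * (x**8 + 1)

ok = all(factored(x) == x**17 - x for x in range(-9, 10))
print(ok)   # True
```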
How to calculate linear approximations for $3.97^{2.5}$ Posted: 18 Jul 2021 07:06 PM PDT How do I calculate the linear approximation for $3.97^{2.5}$? How do I apply the following formula when I don't know what $f(x)$ would be here? $$f(x + \Delta x) \approx f(x) + f'(x)\,\Delta x$$ Thank you. |
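Here $f(x)=x^{2.5}$, with base point $a=4$ (chosen so $f$ and $f'$ are easy to evaluate: $f(4)=4^{2.5}=32$, $f'(x)=2.5x^{1.5}$, $f'(4)=20$) and $\Delta x = -0.03$. A quick sketch:

```python
f = lambda x: x ** 2.5
fprime = lambda x: 2.5 * x ** 1.5   # derivative of x^2.5

a = 4.0                  # base point near 3.97 with easy exact values
dx = 3.97 - a            # -0.03

approx = f(a) + fprime(a) * dx
print(approx)            # ≈ 31.4
print(f(3.97))           # ≈ 31.403 (true value, for comparison)
```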
How to calculate $x$ for $2^{2^{x}} = 36 \cdot 10^{14}$? Posted: 18 Jul 2021 07:19 PM PDT I need to calculate $x$ for this: $$2^{2^{x}} = 36 \cdot {10^{14}}$$ I applied $\log_2$ on both sides and got: $$2^x = \log_2(36) + \log_2 (10^{14})$$ How should I proceed to get an integer answer for $x$? |
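Taking $\log_2$ once more gives $x=\log_2\bigl(\log_2(36\cdot 10^{14})\bigr)$. A quick numeric check (which suggests $x$ is not an integer for this right-hand side, so either an approximate answer is expected or the constant in the exercise differs):

```python
import math

rhs = 36 * 10**14
inner = math.log2(rhs)     # 2**x = log2(36e14) ≈ 51.68
x = math.log2(inner)       # x ≈ 5.69

print(inner, x)
assert abs(2.0 ** (2.0 ** x) - rhs) / rhs < 1e-9   # round-trip sanity check
```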
Finding the total number (state space) of non-equivalent matrices Posted: 18 Jul 2021 07:15 PM PDT Given a general matrix of dimensions $(x, y)$, in which every element can take one of $N$ discrete values, I need to find the number of all the non-equivalent matrices. Equivalent matrices are defined as ones that can be obtained from the same matrix by interchanging rows and/or columns. The context for this is a Computer Science course - so I'm expected to iterate, and probably sum over different parts (for instance, iterating over $N$ - because it also includes the possibility of only $[1, N-1]$ unique elements). Iterating over the matrices directly cannot be done, because $(x, y)$ and $N$ are large. The part I'm struggling with is how to describe the number of equivalent matrices - because I know that the number of possible matrices is $N^{x \cdot y}$, and just subtracting from there would get me the answer I'm looking for - but I don't actually know how to describe that dependence mathematically. Alternatively, I was trying to get the number of non-equivalent matrices directly, but without success. The most I could make of it is: $$\Omega={N \choose x \cdot y }\sum_{i=1}^N\left({x \cdot y \choose i} \Big/ \prod_{j=1}^i{j!}\right)$$ Where $N \choose x \cdot y$ counts duplications of the same arrangement with different values (for instance, substituting $(0, 1)$ with $(1, 2)$), ${x \cdot y \choose i}$ is the number of ways to populate the matrix, and $\prod_{j=1}^i{j!}$ is the overcounting correction factor. I couldn't reduce the problem to one dimension, as there is some correlation between certain arrangements of rows and columns. I have tried simplifying the problem by generating a list of occurrences per value, given the number of unique values - and plugging those values into the inner sum - but that doesn't seem to work either. 
Any help in improving the formula, defining the constraints on the matrix space, or advice on how to approach this problem in general, would be much appreciated. |
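For small instances the equivalence classes can be counted by brute force, which is useful for validating any closed-form attempt; the scalable tool here is Burnside's lemma over the group $S_x \times S_y$ acting on the cells (a standard approach, though not the formula attempted in the question). A tiny sketch of the brute-force counter:

```python
from itertools import permutations, product

def count_classes(x, y, N):
    """Count x-by-y matrices over {0..N-1} up to row and column permutations
    (brute force -- only feasible for tiny x, y, N)."""
    seen = set()
    classes = 0
    for flat in product(range(N), repeat=x * y):
        M = tuple(flat[i * y:(i + 1) * y] for i in range(x))
        if M in seen:
            continue
        classes += 1
        # mark the whole orbit of M under row/column permutations as seen
        for pr in permutations(range(x)):
            for pc in permutations(range(y)):
                seen.add(tuple(tuple(M[pr[i]][pc[j]] for j in range(y))
                               for i in range(x)))
    return classes

print(count_classes(2, 2, 2))   # 7
```

Burnside's lemma reproduces these counts as $\frac{1}{x!\,y!}\sum_{(\sigma,\tau)} N^{c(\sigma,\tau)}$, where $c(\sigma,\tau)$ is the number of cycles of the induced permutation on cells; e.g. for $2\times 2$ with $N=2$: $(16+4+4+4)/4=7$.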
Get the Transformation of two Coordinate systems by knowing the Pose of one Point in both of them! Posted: 18 Jul 2021 07:00 PM PDT So I have two coordinate systems A and B, and I do not know the transformation between them. Now I have the pose P, consisting of its position (x, y, z) and rotation (R-matrix), in both coordinate systems (Pa, Pb). How can I get the transformation from A to B, so that I could transform the pose Pa2 from coordinate frame B to A? My idea: For the translation it seems easy: t = (Pxb, Pyb, Pzb) - (Pxa, Pya, Pza), Pb2 = Pa2 + t. But how would I do it for the rotation? Rpa: orientation of P with respect to coordinate frame A. Rpb: orientation of P with respect to coordinate frame B. Rp2b = Rbp * Rpa * Rp2a = (Rpb)⁻¹ * Rpa * Rp2a ??? |
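One standard way to set this up (a sketch; all numbers below are made up for illustration): if $R_{pa}, p_a$ and $R_{pb}, p_b$ are P's pose in frames A and B, the rigid transform mapping B-coordinates to A-coordinates is $R_{ab}=R_{pa}R_{pb}^{T}$ and $t_{ab}=p_a-R_{ab}\,p_b$. Note that plain subtraction of positions only yields the translation when the two frames already share the same orientation.

```python
import math

# Minimal 3x3 / vector helpers to keep the sketch stdlib-only.
def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def matvec(A, v):
    return [sum(A[i][k] * v[k] for k in range(3)) for i in range(3)]

def transpose(A):
    return [[A[j][i] for j in range(3)] for i in range(3)]

def rot_z(a):
    c, s = math.cos(a), math.sin(a)
    return [[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]]

# Ground-truth transform from frame B to frame A (pretend it is unknown).
R_ab = rot_z(0.8)
t_ab = [1.0, -2.0, 0.5]

# Pose of the same point P observed in both frames.
R_pb = rot_z(0.3)
p_b = [0.4, 0.7, -0.2]
R_pa = matmul(R_ab, R_pb)                               # orientation in A
p_a = [c + t for c, t in zip(matvec(R_ab, p_b), t_ab)]  # position in A

# Recover the transform from the single shared pose.
R_ab_est = matmul(R_pa, transpose(R_pb))                # R_ab = R_pa * R_pb^T
t_ab_est = [pa - c for pa, c in zip(p_a, matvec(R_ab_est, p_b))]

# Any second pose given in B now maps to A via
#   p2_a = R_ab * p2_b + t_ab   and   R_p2a = R_ab * R_p2b.
print(t_ab_est)   # ≈ [1.0, -2.0, 0.5]
```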
Let $A,B$ be $2\times 2$ matrices, $AB-BA=A^2$, and $\mathrm{Tr}(B) = 0$. Show that $B^2=0$. Posted: 18 Jul 2021 07:28 PM PDT Let $A,B$ be $2\times 2$ matrices, $AB-BA=A^2$, and let the trace of $B$ be zero. Show that $B^2=0$. Clearly, the trace of $A^2$ and $B$ are both $0$. How do we proceed from here? |
What does population proportion mean in statistics? Posted: 18 Jul 2021 07:11 PM PDT My professor keeps saying that the population proportions between 3 different species of plants whose lengths are greater than 3 inches aren't the same. What does that mean? |
Reformulation: why differential operators can be a basis Posted: 18 Jul 2021 07:38 PM PDT I do not know if the image appeared, but what is the explanation for these forming a basis? In case it didn't show up, I'm referring to the partial derivatives. |
Problem with Bertrand's ballot theorem Posted: 18 Jul 2021 07:36 PM PDT I'm currently solving a similar problem to this one: Cashier has no change... catalan numbers.. probability question. In my case, you have an equal number of "votes/steps" for A and an equal number of "votes/steps" for B. The question is: given $2n$ steps, where there are $n$ "-1" steps and $n$ "+1" steps, what's the probability you always remain above 0. The solution is $\frac{1}{n+1}$, which I understand. The issue is my approach. I applied conditional probability: P(X) = P(X | last element = -1)*P(last element = -1) + P(X | last element = 1) * P(last element = 1). But P(last element = -1) = P(last element = 1) = 1/2. Also, P(X|last element = 1) = 0 because if the last appearance is a +1 and the final result sums to 0, the 2nd to last result must be -1. Hence, P(X) = 1/2*P(X | last element = -1). With this form, we have n-1 "-1" values and n "+1" values, allowing us to use Bertrand's theorem here https://en.wikipedia.org/wiki/Bertrand%27s_ballot_theorem#Bertrand's_and_Andr%C3%A9's_proofs which states that "Suppose that candidates A and B are in an election. A receives a votes and B receives b votes, with a > b. The probability that A is always ahead of B is "$(a-b)/(a+b) * {a+b \choose a}$" So the end result I get is $\frac{n - (n-1)}{(n+n-1)} * {2n-1 \choose n} = \frac{1}{2n-1}*{2n \choose n}*1/2$. Then dividing by the total sample space, we get $P(X) = \frac{1}{4n-2}$. Where did I go wrong? |
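Since the walk ends at $0$ after $2n$ steps, "always remain above 0" here presumably means "never go below 0". Exhaustive enumeration (feasible for small $n$) confirms the $\frac{1}{n+1}$ answer, which gives a concrete baseline against which to debug the conditioning argument:

```python
from itertools import combinations

def prob_nonnegative(n):
    """Count arrangements of n (+1)-steps and n (-1)-steps whose prefix
    sums never go below 0; good/total should equal 1/(n+1) (Catalan)."""
    good = total = 0
    for plus_positions in combinations(range(2 * n), n):
        steps = [-1] * (2 * n)
        for i in plus_positions:
            steps[i] = 1
        total += 1
        s = 0
        for d in steps:
            s += d
            if s < 0:
                break
        else:
            good += 1
    return good, total

for n in range(1, 6):
    good, total = prob_nonnegative(n)
    assert good * (n + 1) == total
print("P = 1/(n+1) verified for n = 1..5")
```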
Binomial distribution probability of defective product is 0.01 Posted: 18 Jul 2021 07:05 PM PDT Q. The probability that a product is defective is 0.01. The products are packed in boxes; each box contains 10 products. A company orders a consignment of 12 boxes of products. A purchaser randomly opens three boxes, and accepts the consignment if the 3 boxes altogether contain at most one defective product. Calculate the probability that the purchaser rejects the consignment. Round your answer to 3 decimal places. My Sol: I understand the distribution is binomial but I am not sure of the full solution for this. I am thinking that n=10 (each box contains 10 products) and p=0.99 (non-defective). But I am not sure how to bring in the 3 out of the 12 boxes to find the probability...any help is appreciated. Thank you. |
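Assuming each product is independently defective with probability $0.01$ (my reading of the problem; the 12-box consignment size then only supplies the boxes), the purchaser inspects $3\times 10=30$ products and accepts iff at most one is defective, so the rejection probability is one binomial tail:

```python
from math import comb

p_def = 0.01
n = 30   # 3 boxes x 10 products are actually inspected

# P(accept) = P(0 or 1 defectives among the 30 inspected products)
p_accept = sum(comb(n, k) * p_def**k * (1 - p_def)**(n - k) for k in (0, 1))
p_reject = 1 - p_accept
print(round(p_reject, 3))   # 0.036
```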
Minimize a quadratic form over $\mathbb{Z}$, where the matrix is positive-definite and symmetric Posted: 18 Jul 2021 07:33 PM PDT I'm currently dealing with some quadratic forms over $\mathbb{Z}$ like this one: $$D(x_1,x_2)=(a^2-2b)(x_1^2+x_2^2)+4b(x_1x_2),$$ where $a,b\in\mathbb{R}$ and $a^2-2b>0$. Let us assume that this is always $>0$, unless $x_1=x_2=0$, in which case $D(x_1,x_2)=0$. I know I can translate $D$ to $x^TAx$, where $$A=\begin{pmatrix}a^2-2b & 2b\\ 2b & a^2-2b\end{pmatrix}.$$ Notably, the matrix is symmetric. Now, it really seems like the minimum of this form is always achieved at some $(x_1,x_2)$ with $|x_i| \leq 1$ for $i=1,2$. In other words, every time I compute $D(x_1,x_2)$ with $|x_i| > 1$ for some $i$, the result seems to be always larger than $D(1,0)$ or $D(-1,1)$ or $D(1,1)$. It is definitely doable to prove that by brute force, but is there any result that makes this immediate? For sometimes I encounter similar quadratic forms but with more variables, and the result seems to hold still. The results regarding minimizing quadratic forms I know always involve the form being over $\mathbb{R}$, but the form in this case is over $\mathbb{Z}$, which should result, I think, in a less difficult optimization. But is there any result that says something regarding the minimum ($\neq 0$) of a quadratic form over $\mathbb{Z}$ when the matrix is symmetric and positive-definite? I can't find something of this sort anywhere. Please let me know if my question is inappropriate in some way. |
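A brute-force search over a small integer box (a sketch, not a shortest-vector algorithm) makes it easy to test the conjecture on concrete parameters; the relevant classical theory is reduction of positive-definite binary quadratic forms (Gauss/Lagrange reduction, and Minkowski's successive minima in higher dimensions), which does guarantee the minimum is attained on short vectors:

```python
from itertools import product

def min_nonzero(A, radius):
    """Brute-force minimum of x^T A x over nonzero integer vectors with
    |x_i| <= radius (a sketch, not a lattice-reduction algorithm)."""
    n = len(A)
    best = None
    for x in product(range(-radius, radius + 1), repeat=n):
        if all(v == 0 for v in x):
            continue
        q = sum(A[i][j] * x[i] * x[j] for i in range(n) for j in range(n))
        if best is None or q < best[0]:
            best = (q, x)
    return best

# Hypothetical values a = 3, b = 2 give A = [[a^2-2b, 2b], [2b, a^2-2b]]:
A = [[5, 4], [4, 5]]
print(min_nonzero(A, 1))   # (2, (-1, 1))
print(min_nonzero(A, 5))   # (2, (-1, 1)) -- enlarging the box doesn't help
```

For this example one can see why directly: $5x_1^2+5x_2^2+8x_1x_2 = 4(x_1+x_2)^2 + x_1^2 + x_2^2$, which is minimized over nonzero integers at $(\pm1,\mp1)$.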
In how many ways can you arrange three men and three women if a single man cannot be between two women (in a row of six chairs)? Posted: 18 Jul 2021 07:16 PM PDT WMMW is possible; what I want to prevent is a single man between two women, WMW. What I am trying to do is: 1 - The three women together and the group of men in any position $= 3!(WWW) \cdot 3!(MMM) \cdot 2!$ (in two spaces, to guarantee WWWMMM and MMMWWW are included). 2 - Two women next to each other, two men, and a woman and a man together $= 2! \cdot 1! \cdot 1! \cdot 2!$ (Maybe this is wrong; what I want to count are the related cases where the above applies, for example, WWMMWM, WMWWMM, MWMMWW.) I'm not sure whether I have to add both results, nor whether my approach is correct. |
Is the derivative of the conditional expectation of Y on X equal to the least squares projection? Posted: 18 Jul 2021 07:06 PM PDT Fix a probability space $(\Omega, \mathcal{F}, P)$. Let $Y,X$ be random variables from $\Omega \rightarrow \mathbb{R}$. Let $m(x): \mathbb{R} \rightarrow \mathbb{R}$ be defined by $\mathbb{E}[Y|X=x]$. For readers not satisfied with this definition of $m(x)$, I justify it at the end of the post. Suppose the derivative $m'(x)\vert_{x=\mathbb{E}[X]}$ exists. Is it true that $m'(x)\vert_{x=\mathbb{E}[X]}$ equals the least squares projection coefficient $$ \beta \equiv \frac{\mathbb{E}\left[\left( X-E[X] \right) \left( Y - E[Y] \right) \right]} {\mathbb{E}\left[\left( X-E[X] \right)^2\right]}$$ of $Y$ on $X$? I conjecture this since, by a previous theorem, $\beta$ is the coefficient on $X$ in the best affine approximation to $m(X)$ w.r.t. mean squared error. And can we say anything about higher order derivatives of $m(x)$ at $x=\mathbb{E}[X]$? More on the definition of $m(x)$: Let $Z=\mathbb{E}[Y\vert \sigma(X)]$ be a conditional expectation of $Y$ given $\sigma(X)$. Thus $Z$ is a random variable from $\Omega \rightarrow \mathbb{R}$ measurable w.r.t. $\sigma(X)$, the sigma algebra generated by $X$; I write this as $Z \in \sigma(X)$. If I'm not mistaken, since $Z\in \sigma(X)$, the Doob-Dynkin Factorization Lemma implies that there exists a Borel-measurable function $g: \mathbb{R} \rightarrow \mathbb{R}$ such that $Z = g \, \circ X$. We then define $m(x)=g(x)$. $m(x)$ is not necessarily unique, since the conditional expectation $Z$ is only unique up to measure zero, but I suppose this means $m(x)$ is essentially unique in some sense (not quite sure). Thank you. |
Complement of an angle Posted: 18 Jul 2021 07:40 PM PDT Hansol and Kathy are bored in Mr. Frazer's class, so Hansol takes out her analog watch and shows it to Kathy. The time on the watch reads (correctly) $12:05$. The hour hand of Mr. Frazer's clock also has the interesting habit of only moving when the hour changes. Hansol then asks Kathy to state the acute angle measured by the hands of the analog watch. Kathy, being not so good at math, accidentally states $x$, the complementary angle of the correct answer, in radians. What is the value of $\sin(x)$? The answer is supposed to be C, as that is $\sin(\pi/3)$. However, this question was listed as having the answer of E.) NOTA, and I'm having trouble understanding why. Edit: My best guess is that the angle was originally $30$ deg ($\pi/6$), but she read $60$ ($\pi/3$). But she might have read it as $60$ radians rather than $\pi/3$ radians. |
Are all real and imaginary numbers complex numbers? Posted: 18 Jul 2021 07:22 PM PDT This image makes it seem like real and imaginary numbers are all complex numbers. I thought complex numbers were numbers composed of a real and an imaginary part. This got me thinking, and I think I might see it now. All real numbers can be written as $$r \times i^{4k}$$ and all imaginary numbers can be written as $$i^k \times 1$$ or $$i^k + 0$$ There are other ways to combine them in ways that do nothing to the value. So, is it these combinations that make all real and imaginary numbers into complex numbers? |
How do I determine the lattice points of the convex hull of a set of {0,1}^n points? Posted: 18 Jul 2021 07:32 PM PDT Given a set $S$ of $\{0, 1\}^{n}$ points - that is, $n$-length points where each entry is one of $0$ or $1$ - I am interested in the lattice points of its convex hull, $conv(S)$. It seems pretty obvious to me that the set of lattice points of $conv(S)$ is exactly $S$ itself, but I don't quite know how to go about proving it. For clarification, by "lattice point", I mean a point where every entry is an integer. |
Bachelier model option pricing Posted: 18 Jul 2021 07:06 PM PDT Consider a Brownian motion $W_t$ and Bachelier model $S_t = 1 + µt + W_t$. Find the value of an option that pays $1(S_1 > 1) · 1(S_2 < 1)$. As I understand it, the answer is basically $P(S_1 > 1 \text{ and } S_2 < 1)$, which yields a big integral with parameter $\mu$. I suspect there should be a simple answer. Update: Kurt and Maximilian, thank you for your help. However, I don't believe the answers given are correct. As I understand it now, you are supposed to use risk neutral valuation argument by constructing delta-hedging portfolio, not evaluate the probability directly. Interestingly, in Black-Scholes or Bachelier models, when evaluating options with risk-neutral valuation, the result usually doesn't depend on drift term µ. |
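A Monte Carlo sketch of the direct probability (under the real-world measure; if one instead prices under a risk-neutral measure with zero rate, $S_t$ must be a martingale, the drift drops out, and the price equals the $\mu=0$ value). For $\mu=0$ there is a handy closed-form check via the bivariate-normal orthant probability: $\mathrm{corr}(W_1,W_2)=1/\sqrt2$, so $P(W_1>0,\,W_2<0)=\frac14-\frac{\arcsin(1/\sqrt2)}{2\pi}=\frac18$.

```python
import random

def estimate(mu, trials=200_000, seed=0):
    """Monte Carlo estimate of P(S_1 > 1 and S_2 < 1), S_t = 1 + mu*t + W_t."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        w1 = rng.gauss(0, 1)
        w2 = w1 + rng.gauss(0, 1)    # independent increment over [1, 2]
        hits += (mu + w1 > 0) and (2 * mu + w2 < 0)
    return hits / trials

print(estimate(0.0))   # ≈ 0.125, matching the closed form above
print(estimate(0.5))   # the real-world probability does depend on mu
```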
Galois extension of $K$ generated by the roots of polynomials $f(x_1,x_2, \cdots, x_n) \in K[x_1,x_2, \cdots, x_n]$ Posted: 18 Jul 2021 06:48 PM PDT Let us consider a field $K$ of characteristic $0$. Then we know that any finite extension $L$ of $K$ which is a Galois extension is generated by the roots of a separable polynomial $f(x) \in K[x]$. This is known as Galois theory for one-variable polynomials. My question: My question is about multivariable Galois theory, i.e., Galois extensions of $K$ generated by the roots of polynomials $f(x_1,x_2, \cdots, x_n) \in K[x_1,x_2, \cdots, x_n]$. $(1)$ Can the roots of some $2$-variable polynomial $f(x_1,x_2) \in K[x_1,x_2]$ generate a Galois extension of $K$? The following is a more elaborate question. $(2)$ Consider a $2$-tuple of polynomials $f(X)=(f_1(x_1,x_2),f_2(x_1,x_2)) \in K[x_1,x_2]^2$, where $X=(x_1,x_2)$ is a $2$-tuple. Then the zeros of $f(X)$ are given by $f_1(x_1,x_2)=f_2(x_1,x_2)=0$, i.e., the common solutions. Let us define $$S=\{(\alpha_1,\alpha_2) \in L^2: ~f_1(\alpha_1,\alpha_2)=0=f_2(\alpha_1,\alpha_2) \},$$ where $L$ is some finite extension of $K$. From this post, I know that the set $S$ of solutions can be algebraic and thus $L$ might be a Galois extension of $K$. However, I am confused by the fact that the elements of $K$ are just scalars, i.e., $1$-tuples, while the elements of $S$ are like coordinates, i.e., $2$-tuple vectors. So how do the solutions of $f_1(x_1,x_2)=f_2(x_1,x_2)=0$ generate an algebraic extension of $K$? Example: As in my second question, suppose I consider $f(X)=(f_1(X),f_2(X)) \in K[x,y]^2$ given by $f(x,y)=(x^2+y^2-2,x^2-2y^2-1)$, where $X=(x,y)$. Then the zeros of $f(X)$ are given by $x^2+y^2-2=0=x^2-2y^2-1$, and those are $(\pm \sqrt{\frac{5}{3}}, \pm \frac{1}{\sqrt{3}})$. In this case, how would we describe the extension? 
Is it like $K(\sqrt{\frac{5}{3}},-\sqrt{\frac{5}{3}}, \sqrt{\frac{1}{3}},-\sqrt{\frac{1}{3}}) \cong K(\sqrt{\frac{5}{3}})$ or like $K\left((\sqrt{\frac{5}{3}}, \sqrt{\frac{1}{3}}), (-\sqrt{\frac{5}{3}},\sqrt{\frac{1}{3}}), (\sqrt{\frac{5}{3}},-\sqrt{\frac{1}{3}}), (-\sqrt{\frac{5}{3}},-\sqrt{\frac{1}{3}})\right)$? Clearly the first case is a degree-one extension. What about the second case? Any discussion please. |
About the inequality $x^{x^{x^{x^{x^x}}}} \ge \frac12 x^2 + \frac12$ Posted: 18 Jul 2021 07:25 PM PDT Problem: Prove that $x^{x^{x^{x^{x^x}}}} \ge \frac12 x^2 + \frac12$ for all $x > 0$.
Remark 1: The problem was posted on MSE (now closed). Remark 2: I have a proof (see below). My proof is not nice. For example, we need to prove that $\frac{3x^2 - 3}{x^2 + 4x + 1} + \frac{12}{7} - \frac{24x^{17/12}}{7x^2 + 7} \le 0$ for all $0 < x < 1$, which is not pleasant. I want to know whether there is a nicer proof. Also, I want my proof reviewed for correctness. Any comments and solutions are welcome and appreciated. My proof (sketch): We split into cases: i) $x \ge 1$: Clearly, $x^{x^{x^{x^{x^x}}}}\ge x^x$. By Bernoulli's inequality, we have $x^x = (1 + (x - 1))^x \ge 1 + (x - 1)x = x^2 - x + 1 \ge \frac12 x^2 + \frac12$. The inequality is true. ii) $0 < x < 1$: It suffices to prove that $$x^{x^{x^{x^x}}}\ln x \ge \ln \frac{x^2 + 1}{2}$$ or $$x^{x^{x^{x^x}}} \le \frac{\ln \frac{x^2 + 1}{2}}{\ln x}$$ or $$x^{x^{x^x}}\ln x \le \ln \frac{\ln \frac{x^2 + 1}{2}}{\ln x}$$ or $$x^{x^{x^x}}\ge \frac{1}{\ln x}\ln \frac{\ln \frac{x^2 + 1}{2}}{\ln x}.$$ It suffices to prove that $$x^{x^{x^x}}\ge \frac{7}{12} \ge \frac{1}{\ln x}\ln \frac{\ln \frac{x^2 + 1}{2}}{\ln x}. \tag{1}$$ First, it is easy to prove that $$x^x \ge \mathrm{e}^{-1/\mathrm{e}} \ge \frac{1}{\ln x}\ln\frac{\ln\frac{7}{12}}{\ln x}.$$ Thus, the left inequality in (1) is true. Second, let $f(x) = x^{7/12}\ln x - \ln \frac{x^2 + 1}{2}$. We have \begin{align*} f'(x) &= \frac{7}{12x^{5/12}} \left(\ln x + \frac{12}{7} - \frac{24x^{17/12}}{7x^2 + 7}\right)\\ &\le \frac{7}{12x^{5/12}} \left(\frac{3x^2 - 3}{x^2 + 4x + 1} + \frac{12}{7} - \frac{24x^{17/12}}{7x^2 + 7}\right)\\ &\le 0 \tag{2} \end{align*} where we have used $\ln x \le \frac{3x^2 - 3}{x^2 + 4x + 1}$ for all $x$ in $(0, 1]$. Also, $f(1) = 0$. Thus, $f(x) \ge 0$ for all $x$ in $(0, 1)$. Thus, the right inequality in (1) is true. We are done. |
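As a cheap safeguard while reviewing a proof like this, the inequality can be spot-checked numerically on a grid; this is of course not a proof, but it catches sign or case errors early. The grid stops at $1.44 < e^{1/e}$ only to keep the power tower from overflowing floats:

```python
def tower(x, height=6):
    # evaluates x^x^...^x with `height` copies of x, built from the top down
    r = x
    for _ in range(height - 1):
        r = x ** r
    return r

# sample x in (0, 1.44], skipping x = 1 where both sides equal 1
grid = [i / 1000 for i in range(1, 1441) if i != 1000]
ok = all(tower(x) + 1e-9 >= 0.5 * x * x + 0.5 for x in grid)
print(ok)   # True
```

Near $x=1$ the two sides agree to second order (both are $1 - \delta + O(\delta^2)$ at $x = 1-\delta$), which is why the check carries a small floating-point tolerance.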
How was the Ornstein–Uhlenbeck process originally constructed? Posted: 18 Jul 2021 06:57 PM PDT It is natural to wonder about a Brownian motion with a drift toward $0$ whose rate is equal to the current value of the process. Unlike the standard Wiener process, which is null-recurrent, this is positive-recurrent and seldom wanders very far from $0.$ But the way I have seen it presented is this:
Would a reasonable person think something like that is of interest other than as a routine exercise for undergraduates unless they knew the (to me) unexpected result? What sort of thought process could possibly lead someone from wondering about the mean-reverting Brownian motion to coming up with line $(1)$ above as the answer? |
Continuous mapping from matrices to eigenvalues Posted: 18 Jul 2021 07:36 PM PDT Let $n\in\mathbb{N}$, and let $M_n(\mathbb{C})$ be the set of all $n\times n$ matrices whose entries are taken from $\mathbb{C}$. Now, is there a continuous function $f:M_n(\mathbb{C})\to\mathbb{C}^n$ that maps every matrix $A$ exactly to its eigenvalues $\lambda_1,...,\lambda_n$? I don't know how to proceed. If there exists such a function, then how can I prove its continuity? Thanks in advance. |
How to compute the eigenvalue condition number of a matrix Posted: 18 Jul 2021 07:04 PM PDT How to compute the eigenvalue condition number, $\kappa(4,A)$, of the matrix $$A = \begin{bmatrix} 4 & 0 \\ 1000 & 2\end{bmatrix}$$ I am a bit stuck on how to proceed with this problem. I know that the eigenvalue condition number formula is $\kappa(\lambda, A) = \|y\|\|x\|$, but other than that, I am lost. |
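With the common definition $\kappa(\lambda,A)=\|x\|\,\|y\|/|y^{H}x|$ (which reduces to $\|x\|\,\|y\|$ under the normalisation $y^{H}x=1$ that the quoted formula assumes), the eigenvectors for $\lambda=4$ of this triangular matrix can be read off by hand; a small sketch:

```python
import math

# kappa(lambda, A) = ||x|| * ||y|| under the normalisation y^T x = 1,
# where x and y are the right and left eigenvectors for lambda.
# For A = [[4, 0], [1000, 2]] and lambda = 4:
#   (A - 4I) x = 0   =>  1000*x1 - 2*x2 = 0  =>  x = (1, 500)
#   y^T (A - 4I) = 0 =>  y = (1, 0), and conveniently y^T x = 1 already.
x = (1.0, 500.0)
y = (1.0, 0.0)
assert sum(yi * xi for yi, xi in zip(y, x)) == 1.0   # normalisation check

kappa = math.hypot(*x) * math.hypot(*y)
print(kappa)   # ≈ 500.001 -- lambda = 4 is very sensitive to perturbations
```

The large off-diagonal entry makes the left and right eigenvectors nearly orthogonal in the unnormalised sense, which is exactly what a large $\kappa$ measures.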