Recent Questions - Mathematics Stack Exchange |
- Conditional expectation of symmetric random walk
- Finding a line with minimum distance among a set of points
- If we want $x^n$ to have an inverse for all non-zero integers $n$, what must the domain of $x$ be?
- fully faithful of the direct image, but on $\text{Ext}^1$
- Proof that $N(t,A):=\left|\left\{s\in(0,t]:\Delta X(s)\in A\right\}\right|$ has independent increments
- is there an equation to find how many triangles exist in this?
- Is there a combinatorial proof that Euler's totient function divides Jordan's totient function?
- Baby Rudin 1.6(d) - Proof verification
- Let $ T(A) = A^t $ be a linear transformation with $ \langle A, B \rangle = tr(A^t B) $. Then $ T^* = T $
- probability winning game with changing winning conditions
- First order logic - proof outside of the deduction system
- Bayesian Posterior Confidence Interval
- How to find the gradient of matrix multiplying hadamard product
- $(1+x)^a$ Maclaurin series
- Regarding Enderton’s group axioms
- Quick proof clarification: Show 1 dimensional manifold is homeomorphic to $\mathbb{R}$ or a circle
- given two bases and the matrix, how to find the linear transformation?
- Find the standard deviation based on the mean and deviating sample
- Given $\partial_vg(2,0),$ compute $\partial_vf(2,0),$ where $f=g_1^2+\sin(g_1+g_2)$
- Trigonometric Function variables
- Rank of a tall binary matrix with IID Bernoulli entries with probability function of size.
- Do $u\in H^1(\Omega)$ and $\Delta u\in L^2(\Omega)$ imply $u\in H^2(\Omega)$?
- Möbius transformation with infinity in both the $w$-plane and $z$-plane.
- Is this object a simpler Brunnian "rubberband" loop than those studied?
- Where's my mistake (parametrized surface integral)?
- $a,b>0$ then: $\frac{1}{a^2}+b^2\ge\sqrt{2(\frac{1}{a^2}+a^2)}(b-a+1)$
- Does sticking LEGO bricks together make them easier or harder to pack away into a box of fixed volume?
- What is the probability that the best candidate was hired?
- How to Find the null space and range of the orthogonal projection of $\mathbb{R}^3$ on a plane
- Let $S_n$ be a Simple Random Walk. What is $E[S_m|S_n]$ if $m < n$?
Conditional expectation of symmetric random walk Posted: 22 Dec 2021 01:31 PM PST Question: We have a symmetric random walk on the integers starting at n=1. Let L be the first time the walk hits 0. Let A be the event that the walk hits 0 before it hits 3. What is E(L|A)? I am really not sure how to deal with this conditional expectation or how to even start this question. I have tried to use the definition of conditional expectation but I just haven't gotten anywhere. I'm not sure if I'm missing something obvious, so I'd appreciate any hints. |
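A quick Monte Carlo sketch can put a number on $E(L\mid A)$ to check a closed-form answer against; it assumes the usual setup of i.i.d. $\pm1$ steps with probability $1/2$ each, started at $1$ and stopped on first hitting $0$ or $3$:

```python
import random

def estimate_E_L_given_A(trials=200_000, seed=0):
    """Estimate E[L | walk hits 0 before 3] for a simple +/-1 walk started at 1."""
    rng = random.Random(seed)
    total_steps, hits = 0, 0
    for _ in range(trials):
        pos, steps = 1, 0
        while pos not in (0, 3):
            pos += 1 if rng.random() < 0.5 else -1
            steps += 1
        if pos == 0:              # event A: the walk reached 0 before 3
            total_steps += steps
            hits += 1
    return total_steps / hits

print(estimate_E_L_given_A())     # first-step analysis suggests 5/3 ≈ 1.667
```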
Finding a line with minimum distance among a set of points Posted: 22 Dec 2021 01:29 PM PST Suppose I have a set of points denoted by $a_i$. I want to minimize this model: $$\sum_i \big(w_i\,|x-a_i|\,\mathbf{1}(x-a_i)+v_i\,|x-a_i|\,\mathbf{1}(a_i-x)\big)$$ where $\mathbf{1}(t)$ denotes an indicator function which takes value $1$ if $t>0$ and $0$ otherwise. I believe this optimization model should have a closed-form solution. In fact, the solution represents a line that minimizes the weighted distance between this line (i.e., $y=x$) and a set of points. |
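Since the objective is convex and piecewise linear in $x$, its minimum is attained at one of the breakpoints $a_i$ (a weighted-quantile-type solution), so a closed-form guess can be checked numerically. A small sketch, assuming the reading $\mathbf 1(t)=1$ for $t>0$ and using randomly generated data:

```python
import numpy as np

def objective(x, a, w, v):
    """sum_i w_i*(x-a_i)_+ + v_i*(a_i-x)_+  (asymmetric weighted absolute loss)."""
    d = x - a
    return np.sum(w * np.clip(d, 0, None) + v * np.clip(-d, 0, None))

rng = np.random.default_rng(0)
a = rng.normal(size=8)
w = rng.uniform(0.5, 2.0, size=8)
v = rng.uniform(0.5, 2.0, size=8)

# Convex piecewise-linear objective => a minimizer can be taken among the breakpoints a_i.
best = min(a, key=lambda t: objective(t, a, w, v))
print(best, objective(best, a, w, v))
```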
If we want $x^n$ to have an inverse for all non-zero integers $n$, what must the domain of $x$ be? Posted: 22 Dec 2021 01:26 PM PST Background Let $f(x) = x^n$ with $n\in\mathbb Z$ and $x \in \mathbb R$. For $f$ to have an inverse, $n$ must be odd. Question Can we change the domain of $x$ such that $x^n$ has an inverse for all integers $n$ except $0$? |
fully faithful of the direct image, but on $\text{Ext}^1$ Posted: 22 Dec 2021 01:07 PM PST Let $i:C\subset S$ where $S$ is a smooth projective complex surface and $C$ a smooth projective complex curve; then the functor $i_*:\textbf{Coh}(C)\rightarrow\textbf{Coh}(S)$ is fully faithful, i.e. $$\text{Hom}_C(\mathcal{E},\mathcal{F})\cong\text{Hom}_S(i_*\mathcal{E},i_*\mathcal{F})$$ Can we show that $\text{Ext}^1_C(\mathcal{E},\mathcal{F})\cong\text{Ext}^1_S(i_*\mathcal{E},i_*\mathcal{F})$ in general or with some additional (but not very special) conditions? If not, can we show it for special examples such as $C\cong\mathbb{P}^1$ and $\mathcal{E}=\mathcal{F}=\mathcal{O}_C(m)$ or even $\mathcal{O}_C$? |
Proof that $N(t,A):=\left|\left\{s\in(0,t]:\Delta X(s)\in A\right\}\right|$ has independent increments Posted: 22 Dec 2021 12:58 PM PST Let $E$ be a normed $\mathbb R$-vector space, $(X(t))_{t\ge0}$ be an $E$-valued càdlàg Lévy process on a probability space $(\Omega,\mathcal A,\operatorname P)$ and$^1$ $$N(t,A):=\left|\left\{s\in(0,t]:\Delta X(s)\in A\right\}\right|\;\;\;\text{for }t\ge0$$ for some $A\in\mathcal B(E)$ with $0\not\in\overline A$.
Regarding (2.3): Since $0\not\in\overline A$, $$r:=\operatorname{dist}(0,A)>0$$ and hence $$\left|\left\{t\in[a,b]:\Delta x(t)\in A\right\}\right|\le\left|\left\{t\in[a,b]:\left\|\Delta x(t)\right\|_E\ge r\right\}\right|\tag1$$ is finite for all $a,b\in\mathbb R$ and every càdlàg $x:[0,\infty)\to E$. But this only yields the existence of the $t_i$ (as claimed in the proof), which depend on $\omega$; or am I missing something? Regarding (2.4): I actually don't get this. If $x:[0,\infty)\to E$ has a left-limit at $t\ge0$, then it clearly holds that for every $\varepsilon>0$, there is a $\delta>0$ with $$\left\|x(r)-x(t)+\Delta x(t)\right\|_E=\left\|x(r)-x(t-)\right\|_E<\varepsilon\tag2$$ for all $r\in(t-\delta,t)$. But this doesn't yield the "only if" part of the claim in the proof, since we cannot choose $a=\Delta X(\omega,t)$ (for some fixed $\omega$), since we only know (by assumption) that $\Delta X(\omega,t)\in A$, which doesn't necessarily imply that $-a\in A$. And the "if" part isn't clear to me at all. How can we show these things rigorously? As usual, $x(t-):=\lim_{s\to t-}x(s)$ and $\Delta x(t):=x(t)-x(t-)$ for every $x:[0,\infty)\to E$ with a left-limit at $t\ge0$. |
is there an equation to find how many triangles exist in this? Posted: 22 Dec 2021 12:58 PM PST In the image above, which I drew rather roughly, I am showing first a circle. The following "shapes" have small circles which represent points. All those points are positioned on the perimeter of that circle. Each and every point is directly connected by a straight line to all the other points. My questions are:
I hope you understand my questions, and sorry for that image. I have searched online but got nothing. Bear in mind that I have elementary math knowledge and am not so "bright" :D. Thank you in advance! |
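If the picture is the complete set of chords on $n$ points of a circle and no three chords meet at a single interior point, a known closed-form count of all triangles (with corners at the original points or at chord crossings) is $\binom n3+4\binom n4+5\binom n5+\binom n6$; a short sketch to tabulate it:

```python
from math import comb

def triangles(n):
    """Triangles in the complete chord diagram on n circle points,
    assuming no three chords meet at one interior point."""
    return comb(n, 3) + 4 * comb(n, 4) + 5 * comb(n, 5) + comb(n, 6)

for n in range(3, 9):
    print(n, triangles(n))    # 3 -> 1, 4 -> 8, 5 -> 35, 6 -> 111, ...
```

(For special positions such as a regular hexagon, concurrent chords reduce the count, e.g. 110 instead of 111.)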
Is there a combinatorial proof that Euler's totient function divides Jordan's totient function? Posted: 22 Dec 2021 01:29 PM PST Jordan's totient function $J_{k}(n)$ is a generalization of Euler's totient function that counts the number of $k$-tuples $(a_1, \ldots, a_k)$ for which $1 \leq a_1, \ldots, a_k \leq n$ and $\gcd(a_1, \ldots, a_k, n) = 1$, where $n$ and $k$ are positive integers. There is an explicit formula for this function as follows: $$J_{k}(n) = n^k \prod_{p} \left(1 - \frac{1}{p^k}\right),$$ where $p$ ranges over the prime divisors of $n$. Using this identity, it is pretty straightforward to deduce that $J_{1}(n)$, which is Euler's totient function, divides $J_{k}(n)$ for all positive integers $k$ and $n$. However, given the simplicity of the definitions, I believe there is a much more elegant combinatorial proof of this fact that does not rely on the explicit formula. I would be glad if anyone could provide a combinatorial proof or trick that I'm unable to see. |
Baby Rudin 1.6(d) - Proof verification Posted: 22 Dec 2021 12:58 PM PST I have seen several solutions of Exercise 1.6(d) in Baby Rudin, but many seem much more complicated than mine, which leads me to suspect I have made a mistake somewhere. Hence I would appreciate verification of my solution/proof. The exercise formulation:
My solution (for 6(d)): Let $b^t \in B(x+y)$. Then $t \in Q$ and $t \le x + y$. By Theorem 1.20(b) ($Q$ dense in $R$) there exists $r \in Q$ such that $t - y \le r \le x$. Let $s = t - r \le t - (t-y) = y$. Then $$b^t = b^{r+s} = b^r b^s \le b^x b^y.$$ Hence $b^x b^y$ is an upper bound of $B(x+y)$, and $b^x b^y \ge b^{x+y}$. Using the lemma, $$b^x b^y = \sup B(x) \cdot \sup B(y) = \sup(B(x)B(y)).$$ But $B(x)B(y) = \{b^u b^v \mid u \in Q, v \in Q, u \le x, v \le y\}$ is clearly bounded above by $\sup B(x+y)$, so $$b^x b^y \le \sup B(x+y) = b^{x+y}.$$ Lemma: Let $A$ and $B$ be sets of positive real numbers. Then $$\sup A \cdot \sup B = \sup AB,$$ where $AB$ is the set of all products $ab$ such that $a \in A$ and $b \in B$. Proof: Let $\alpha = \sup A$, $\beta = \sup B$, and $\gamma = \sup AB$. For arbitrary $a \in A$, $b \in B$ we have $ab \le \alpha \beta$, so $\alpha \beta$ is an upper bound of $AB$, and hence $\alpha \beta \ge \gamma$. Moreover, $a \le \gamma/b$, and since $a \in A$ was arbitrary, $\gamma/b$ is an upper bound of $A$, implying $\gamma/b \ge \alpha$, or equivalently $\gamma/\alpha \ge b$. But $b \in B$ was also arbitrary, so we obtain in the same way $\gamma/\alpha \ge \beta$, and this concludes the proof. |
Let $ T(A) = A^t $ be a linear transformation with $ \langle A, B \rangle = tr(A^t B) $. Then $ T^* = T $ Posted: 22 Dec 2021 12:48 PM PST Theorem: Let $ V = M_n(\mathbb{R}) $ with inner-product $ \langle A, B \rangle = tr(A^t B) $. Let $ T: V \to V $ be a linear transformation defined as $ A \mapsto A^t $ for all $ A \in M_n(\mathbb{R}) $. Thus $ T^* = T $ (the adjoint transformation is equal to the transformation). Proof attempt: (I've used the facts that $ tr(AB) = tr(BA) $ and $ tr(A) = tr(A^t) $ for square matrices.) Note that $ \langle T(A), B \rangle = \langle A, T^* B \rangle = \langle A, T(B) \rangle $ so that $ \langle A,T^*(B) - T(B) \rangle = 0 $ for all $ A,B \in M_n(\mathbb{R}) $. Question: How do I continue from where I've stopped to show that $ T^* = T $? I have no clue. Thanks in advance for help! |
probability winning game with changing winning conditions Posted: 22 Dec 2021 12:48 PM PST I have the following problem: Suppose I have a 10-level game. You start at level 0, and your direction is upwards (this will be of importance later). At each timestep you can ascend or descend a level. The probability of you going in the direction you are currently heading is T. The probability of you going in the opposite direction is (1-T); in this case you swap directions. You win if you reach level 10; the game is over when you reach level -1. I am now interested in the distribution of the players winning the game by time x: How many of k players win the game in 10 time steps, how many in 11, in 42, ... I thank everyone who tries to help me! Cheers Rucki |
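One way to get at the winning-time distribution is to simulate it. A sketch, assuming that "swap directions" means you move one level the opposite way on that same step and keep the new heading afterwards:

```python
import random
from collections import Counter

def play(T, rng, max_steps=10_000):
    """Return the time step at which level 10 is reached, or None if the
    game ends at level -1 first (or never finishes within max_steps)."""
    level, direction = 0, +1              # start at level 0, heading upwards
    for t in range(1, max_steps + 1):
        if rng.random() < T:
            level += direction            # keep going the same way
        else:
            direction = -direction        # swap direction and move that way
            level += direction
        if level == 10:
            return t
        if level == -1:
            return None
    return None

def winning_time_distribution(T=0.7, players=100_000, seed=1):
    rng = random.Random(seed)
    times = Counter(t for t in (play(T, rng) for _ in range(players)) if t is not None)
    return dict(sorted(times.items()))

print(winning_time_distribution())        # note: only even times >= 10 can occur, by parity
```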
First order logic - proof outside of the deduction system Posted: 22 Dec 2021 12:50 PM PST I have now started studying first order logic and I find Fitch-style proofs very intuitive and mechanical. But then I encountered these two questions and although I know they're true, I have no idea how to go about proving them. Consider $\Sigma$ a set of formulas of a first order language, and we know that $x$ does not occur in any of the formulas of $\Sigma$. We then have to prove that if $\Sigma \cup \{ \phi \} \vdash \psi$ and $x$ does not occur free in $\psi$, then $\Sigma \cup \{\exists x \phi \} \vdash \psi$. We also have to prove that if $\Sigma \cup \{ \phi \} \vdash \psi$ then $\Sigma \cup \{ \exists x \phi \} \vdash \exists x \psi$. I would like to know how this kind of proof works, and if anyone has any idea where to find similar exercises I would be grateful. Thank you |
Bayesian Posterior Confidence Interval Posted: 22 Dec 2021 12:50 PM PST Let $\mathrm{Exp}(\lambda)$ denote the exponential distribution with mean $1/\lambda$. Suppose $X_1, \ldots, X_n \sim \mathrm{Exp}(\lambda)$ are i.i.d., and we are interested in Bayesian inference of the parameter $\lambda > 0$. Let the prior distribution for $\lambda$ be $\mathrm{Exp}(\gamma)$ where $\gamma > 0$ is a given constant. Suppose we observe $x^n = (x_1, \ldots, x_n) = (X_1(\omega), \ldots, X_n(\omega))$. Fix $\alpha \in (0,1)$. By considering a quadratic Taylor expansion $Q(\theta)$ of $\log f(\theta\mid x^n)$ about the MAP estimator $\hat\theta$ and replacing $f(\theta\mid x^n) \approx e^{Q(\theta)}$, compute an approximate $(1 - \alpha)$ Bayesian posterior confidence interval for $\theta$. So far I calculated the posterior $$f(\lambda\mid x^n) = \frac{\lambda^n \exp(-\lambda \sum_i x_i)\,\gamma\exp(-\gamma\lambda)}{\int \lambda^n \exp(-\lambda \sum_i x_i)\,\gamma\exp(-\gamma\lambda)\,d\lambda}.$$ I also calculated that $$\hat\lambda_{\mathrm{MAP}} = \arg\max_\lambda\Big(\log\big(\lambda^n \exp(-\lambda \textstyle\sum_i x_i)\big)+\log\big(\gamma\exp(-\gamma\lambda)\big)\Big).$$ I'm not sure if this is correct, but this is as far as I have gotten and I have no idea how to relate this to what the question is asking. Any advice would be appreciated. |
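One way the quadratic-expansion recipe can play out here (identifying the problem's $\theta$ with the $\lambda$ above): with $S=\sum_i x_i$ the log-posterior is $n\log\lambda-(S+\gamma)\lambda+\text{const}$, the MAP is $\hat\lambda=n/(S+\gamma)$, the second derivative there is $-n/\hat\lambda^2$, so the Laplace-type interval is $\hat\lambda\pm z_{\alpha/2}\,\hat\lambda/\sqrt n$. A sketch of that recipe (the function name and test data are illustrative only):

```python
import numpy as np
from scipy.stats import norm

def laplace_interval(x, gamma, alpha=0.05):
    """Approximate (1-alpha) posterior interval for lambda: Exp(gamma) prior,
    i.i.d. Exp(lambda) data, quadratic (Laplace) expansion about the MAP."""
    x = np.asarray(x, dtype=float)
    n, S = x.size, x.sum()
    lam_map = n / (S + gamma)         # mode of the Gamma(n+1, S+gamma) posterior
    sd = lam_map / np.sqrt(n)         # 1/sqrt(-(d^2/dlam^2) log posterior at the MAP)
    z = norm.ppf(1 - alpha / 2)
    return lam_map - z * sd, lam_map + z * sd

rng = np.random.default_rng(0)
data = rng.exponential(scale=1 / 2.0, size=50)    # synthetic data, true lambda = 2
print(laplace_interval(data, gamma=1.0))
```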
How to find the gradient of matrix multiplying hadamard product Posted: 22 Dec 2021 01:30 PM PST I'm trying to find the gradient of A(x∘x) with respect to x, where ∘ is the Hadamard product and A is a matrix with positive real values. Thanks in advance! |
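For $f(x)=A(x\circ x)$ we have componentwise $f_i=\sum_j A_{ij}x_j^2$, so a natural candidate for the Jacobian is $2\,A\,\mathrm{diag}(x)$. A finite-difference sketch (with made-up sizes) to sanity-check that formula:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.uniform(0.1, 1.0, size=(4, 3))     # positive entries, as in the question
x = rng.normal(size=3)

f = lambda t: A @ (t * t)                  # A (x ∘ x)

J_formula = 2 * A * x                      # same as 2 * A @ np.diag(x) (column-wise scaling)
J_numeric = np.empty((4, 3))
h = 1e-6
for k in range(3):
    e = np.zeros(3)
    e[k] = h
    J_numeric[:, k] = (f(x + e) - f(x - e)) / (2 * h)

print(np.max(np.abs(J_formula - J_numeric)))   # tiny, so the two Jacobians agree
```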
$(1+x)^a$ Maclaurin series Posted: 22 Dec 2021 01:25 PM PST My book has a small section about the Maclaurin series of $f(x) = (1+x)^a$. The number $a$ is a real constant (it could be any number). Now... We would like to know for which values of $x$ and $a$ it is true that: $$(1+x)^a = 1 + \dbinom{a}{1}x + \dbinom{a}{2}x^2 + \dbinom{a}{3}x^3 + \ldots \tag{1}$$ (i.e. that this series is convergent and its sum is the LHS). Here we use the generalized binomial coefficient notation: $$\dbinom{a}{n} = \frac{a(a-1)\dots(a-(n-1))}{n!}$$
I got confused with the boundary cases.
|
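A rough numerical companion, using the same generalized binomial coefficient, is to compare partial sums of (1) with $(1+x)^a$; for $|x|<1$ they visibly converge, and the boundary points $x=\pm1$ can then be probed case by case:

```python
def binom(a, n):
    """Generalized binomial coefficient a(a-1)...(a-(n-1))/n!."""
    out = 1.0
    for i in range(n):
        out *= (a - i) / (i + 1)
    return out

def partial_sum(a, x, N):
    return sum(binom(a, n) * x**n for n in range(N + 1))

a, x = 0.5, 0.9
for N in (5, 20, 80):
    print(N, partial_sum(a, x, N), (1 + x) ** a)   # partial sums approach (1+x)^a for |x| < 1
```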
Regarding Enderton’s group axioms Posted: 22 Dec 2021 12:46 PM PST In his A Mathematical Introduction to Logic Enderton writes on page 93 (2nd edition): "… The class of all groups is an elementary class, being the class of all models of the conjunction of the group axioms: ∀ x ∀ y ∀ z (x ◦ y) ◦ z = x ◦ (y ◦ z); I can't deduce the existence of a unique identity element from the above axioms. Am I missing something? |
Quick proof clarification: Show 1 dimensional manifold is homeomorphic to $\mathbb{R}$ or a circle Posted: 22 Dec 2021 01:00 PM PST I am trying to see it in the case where we just have two open sets $U,V$ that cover the connected manifold $M$, each of which is homeomorphic to the real line. I know that the answer depends on the number of connected components of the intersection. I just really need a clarification, with a proof, for the case where we have two connected components (in which case it is homeomorphic to a circle). I am following this nice proof. And the part I am stuck on is specifically proposition 2, part c. How could we prove the map is a homeomorphism? And why again is the union of $U$ and $V$ compact? Thanks a lot. Edit: I am thinking: just because $h_1=f \circ \phi$ and $h_2=g \circ \psi$ are homeomorphisms onto their respective images, their images don't intersect, and they cover the whole square, doesn't it follow that the function $\eta$ is a homeomorphism by the pasting lemma? |
given two bases and the matrix, how to find the linear transformation? Posted: 22 Dec 2021 01:01 PM PST Let $f$ be a function defined from $\mathbb{R}^3$ to $\mathbb{R}^2$, with matrix equal to $$ A = \begin{bmatrix}2&0&-1\\0&-1&1\end{bmatrix}$$ with respect to these two bases: $$ B = ((1, -1, 0), (0, 1, 0), (0, 0, 2))$$ and $$ B' = ((0, 2), (-1, 0)) $$ I need to find the linear transformation. For example, if I have $f:\mathbb{R}^3 \to \mathbb{R}^2$ defined as $$ f: (x, y, z) \mapsto (2x+y, z-y), $$ I need to find this part ($(2x+y, z-y)$), but in the other exercise I have no clue how to do it; I don't even know how to start. I know that if I have this part ($(2x+y, z-y)$), it's pretty easy to find the matrix: just find the image of each vector of the basis of $\mathbb{R}^3$, and then use the knowledge of span to find the scalars (i.e. the entries of the matrix). But I don't know how to do the reverse, from matrix to linear transformation. I hope this question is clear; if it's not, let me know and I'll edit it. |
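One standard recipe, assuming the usual convention that $A$ maps $B$-coordinates to $B'$-coordinates: if $P_B$ and $P_{B'}$ are the matrices whose columns are the basis vectors of $B$ and $B'$, then the matrix of $f$ in the standard bases is $M=P_{B'}\,A\,P_B^{-1}$, and $f(x,y,z)=M\,(x,y,z)^T$. A small symbolic sketch of that computation:

```python
import sympy as sp

A    = sp.Matrix([[2, 0, -1], [0, -1, 1]])
P_B  = sp.Matrix([[1, 0, 0], [-1, 1, 0], [0, 0, 2]])   # columns: (1,-1,0), (0,1,0), (0,0,2)
P_Bp = sp.Matrix([[0, -1], [2, 0]])                    # columns: (0,2), (-1,0)

M = P_Bp * A * P_B.inv()                               # matrix of f w.r.t. the standard bases
x, y, z = sp.symbols('x y z')
print(M)
print(sp.simplify(M * sp.Matrix([x, y, z])))           # the sought formula f(x, y, z)
```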
Find the standard deviation based on the mean and deviating sample Posted: 22 Dec 2021 01:08 PM PST For $1000$ gnomes, $390$ of them were found to deviate from the arithmetic mean in height by no more than $1.4$ inches ($\overline{x} = 57.3$ inches). Can an approximate value for the standard deviation be determined from this data if a normal distribution is assumed? Attempt We know that $39$ percent of the gnomes deviate from the arithmetic mean by no more than $1.4$ inches. Then, find a corresponding $z$-score, which is $z = \pm1.23$. Since I don't know the exact sample values, I don't know how to proceed. |
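If the $39\%$ is read as the central mass, $P(|X-\overline x|\le 1.4)\approx0.39$, then the equation to solve is $\Phi(1.4/\sigma)=(1+0.39)/2$ rather than a direct table lookup of $0.39$; a one-liner sketch of that reading:

```python
from scipy.stats import norm

p_central = 390 / 1000                  # P(|X - mean| <= 1.4) under the normal assumption
z = norm.ppf((1 + p_central) / 2)       # solves Phi(1.4 / sigma) = (1 + 0.39) / 2
sigma = 1.4 / z
print(z, sigma)                         # z ≈ 0.51, sigma ≈ 2.7 inches
```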
Given $\partial_vg(2,0),$ compute $\partial_vf(2,0),$ where $f=g_1^2+\sin(g_1+g_2)$ Posted: 22 Dec 2021 12:49 PM PST
My attempt: First, I noticed $\|v\|=\sqrt{\left(\frac12\right)^2+\left(-\frac{\sqrt 3}2\right)^2}=1$ and $g(2,0)=(\pi,0)\implies g_1(2,0)=\pi,g_2(2,0)=0.$ Therefore we can use the following result:
Also, we can apply the following lemma:
This tells us both $g_1$ and $g_2$ are differentiable. Let $h:\Bbb R\to\Bbb R, h:t\mapsto t^2$ and $k:\Bbb R^2\to\Bbb R, k=g_1+g_2$. Then, $r=h\circ g_1$ and $s=\sin\circ k$ are both differentiable as compositions of differentiable functions, hence $f$ is also differentiable at $(2,0)$ and $$\begin{aligned}Df(2,0)&=Dh(g_1(2,0))Dg_1(2,0)+D\sin((g_1+g_2)(2,0))D(g_1+g_2)(2,0)\\&=2g_1(2,0)Dg_1(2,0)+\cos((g_1+g_2)(2,0))D(g_1+g_2)(2,0)\\&=2\pi Dg_1(2,0)-D(g_1+g_2)(2,0)\end{aligned}$$ In terms of matrices, $\operatorname{grad}g_i(2,0)=e_i^T\nabla g(2,0)$ and $\operatorname{grad}(g_1+g_2)(2,0)=(1,1)\nabla g(2,0).$ Finally, $\begin{aligned}\partial_vf(2,0)&=\langle\operatorname{grad} f(2,0),v\rangle\\&=2\pi(1,0)\nabla g(2,0)v-(1,1)\nabla g(2,0)v\\&=2\pi(1,0)\partial_vg(2,0)-(1,1)\partial_vg(2,0)\\&=(2\pi,0)\cdot\begin{pmatrix}1\\-1\end{pmatrix}-(1,1)\begin{pmatrix}1\\-1\end{pmatrix}\\&=2\pi.\end{aligned}$ Can somebody verify my answer? |
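Since $g$ itself is not pinned down, one cheap cross-check is a finite-difference computation against a hypothetical $g$ consistent with the values used above ($g(2,0)=(\pi,0)$, $\partial_vg(2,0)=(1,-1)$), e.g. $g(x,y)=(\pi+u,\,-u)$ with $u=\langle(x-2,y),v\rangle$:

```python
import numpy as np

v = np.array([0.5, -np.sqrt(3) / 2])

def g(p):
    """A hypothetical g consistent with the data: g(2,0) = (pi, 0), d_v g(2,0) = (1, -1)."""
    u = np.dot(p - np.array([2.0, 0.0]), v)
    return np.array([np.pi + u, -u])

def f(p):
    g1, g2 = g(p)
    return g1**2 + np.sin(g1 + g2)

p0 = np.array([2.0, 0.0])
h = 1e-6
dvf = (f(p0 + h * v) - f(p0 - h * v)) / (2 * h)
print(dvf, 2 * np.pi)     # both ≈ 6.283, consistent with the computed answer 2*pi
```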
Trigonometric Function variables Posted: 22 Dec 2021 01:30 PM PST I'm currently reading Precalculus with Limits and got into chapter 4 on Trigonometry. I now understand that an angle $u$ is a real number that corresponds to a point $(x,y)$ on the unit circle. So we now have two functions, $x=\cos u$ and $y=\sin u$, according to the definitions of the right triangle (trigonometric ratios) and the unit circle. My dilemma is: since we have defined cosine as the function giving $x$, we could call it a function $x=g(u)=\cos u$; when I make a graph, I choose $x$ as the vertical axis and $u$ as the horizontal axis, because cosine moves from right to left on the unit circle. But in the book they write cosine as $y=\cos x$. But $y$ is already taken as the definition of the sine function, $y=f(u)=\sin u$. So how did they interchange the variables here? This matter is very confusing to me. |
Rank of a tall binary matrix with IID Bernoulli entries with probability function of size. Posted: 22 Dec 2021 01:12 PM PST I have a binary matrix $\textbf{X}$ of size $(n,k)$, composed of $0$s and $1$s, where $n>k$. The entries of the matrix are Bernoulli random variables. Specifically, the $(i,j)^{th}$ entry $X_{ij} \sim Bern(q), \forall i,j.$ Also, $X_{ij} \in \{0,1\}$. Note that the $X_{ij}$ are independent variables $\forall i,j.$ I am operating over the field $\mathbb{R}$. Question: $\boxed{\text{What is the probability that Rank(X)=$k$ if $q=1-2^{-1/k}, a>0$ is a constant.}}$ Moreover, in my case, $\boxed{n= C \times k \log k \text{ where C>0 is a constant such that $n>k$.}} $ Can someone provide a characterization of $Rank(\textbf{X})$ in terms of $q$ and $n$? Are there results in the literature which talk about "tall Bernoulli matrices" with probability as a function of the matrix dimension? Tight bounds are ok too. |
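A Monte Carlo sketch (rank computed over $\mathbb{R}$ in floating point, illustrative constants only) can show how $P(\mathrm{Rank}(\mathbf X)=k)$ behaves for $q=1-2^{-1/k}$ and $n=\lceil Ck\log k\rceil$:

```python
import numpy as np

def full_rank_prob(k, C=3.0, trials=500, seed=0):
    """Estimate P(rank(X) = k) for an n-by-k Bernoulli(q) matrix,
    with q = 1 - 2**(-1/k) and n = ceil(C * k * log k)."""
    rng = np.random.default_rng(seed)
    n = max(k + 1, int(np.ceil(C * k * np.log(k))))
    q = 1 - 2 ** (-1 / k)
    hits = sum(
        np.linalg.matrix_rank((rng.random((n, k)) < q).astype(float)) == k
        for _ in range(trials)
    )
    return hits / trials

for k in (4, 8, 16):
    print(k, full_rank_prob(k))
```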
Do $u\in H^1(\Omega)$ and $\Delta u\in L^2(\Omega)$ imply $u\in H^2(\Omega)$? Posted: 22 Dec 2021 01:02 PM PST
Here we say $\Delta u\in L^2(\Omega)$ by meaning $u$ has weak derivatives $\partial_{x_i}^2 u$ for $i=1,\cdots,n$ and $$ \Delta u:=\sum_{i=1}^n \partial_{x_i}^2u\in L^2(\Omega). $$ There is a related problem, and some comments refer to Theorem 8.12 in Gilbarg and Trudinger's book. I think their theorems are essentially proving the following result:
The proof can be accomplished using the Lax-Milgram theorem and the regularity of the weak solution. However, the original problem does not assume $u$ has zero trace, and this result cannot be directly applied. In Helffer's Spectral Theory and Its Applications, Equation (4.4.5) on page 38 defines the space $$ W(\Omega) = \{u\in H^1(\Omega):-\Delta u\in L^2(\Omega)\} $$ and claims that $W(\Omega)\subset H^2(\Omega)$ does NOT hold in general. It seems that the answer to the original problem is negative, but I am not sure where to find a counterexample. Any helpful comments or answers are appreciated. |
Möbius transformation with infinity in both the $w$-plane and $z$-plane. Posted: 22 Dec 2021 01:08 PM PST I want to find the Möbius transformation which brings $f(0)=\infty$, $ f(\infty)=0$, and $f(5)=i$. If I use the formula \begin{equation} \frac{(w-w_1)(w_2-w_3)}{(w-w_3)(w_2-w_1)}=\frac{(z-z_1)(z_2-z_3)}{(z-z_3)(z_2-z_1)} \end{equation} then I get a cancellation of the $w$-variable. That would give just another complex number, $z=5i$. But what does this mean? Thanks |
Is this object a simpler Brunnian "rubberband" loop than those studied? Posted: 22 Dec 2021 12:54 PM PST The standard configuration of Brunnian "rubberband" loops shows a series of unknots, each bent into a U-shape, with their ends looped around the middle of the next unknot, as shown here (image: Brunnian rubberband loop object, drawn by David Epstein). This has the defining Brunnian link property that all elements are interlocked, but the removal of any element causes all the others to disentangle. But it requires each element to link to the next with 8 cross-overs, and be bent around so as to form a minimum of four bights along its "length". I have been playing around with an object that I call an "exaltation of larks", since it is based on lark's-head knots to connect the elements. This reduces the number of cross-overs to 6 per pair, and allows for each element to have only two bights along its length. A visual is here (image: exaltation of larks object). This also seems to fulfill the Brunnian definition, and is substantially simpler. I am wondering if (1) this object is already known and has another name, and/or (2) the object is not Brunnian for some reason I have overlooked. Thanks! (First time here...) Added 12/22/21: I've done a pretty good look-back in the papers at this point, and here's what I've come up with. Dale Rolfsen describes a Brunnian link which is equivalent to a five-element "exaltation of larks" (p. 69, exercise 15, Knots and Links, 1976). This is cited directly in at least one other place ("New Criteria and Construction of Brunnian Links" by Sheng Bai and Weibiao Wang https://arxiv.org/pdf/2006.10290.pdf). None of those authors note that this is a simpler version of the Brunnian chain than the standard, and most compendiums of Brunnian links do not show this pattern at all. So I think I have what I need here, thanks for the help! |
Where's my mistake (parametrized surface integral)? Posted: 22 Dec 2021 12:48 PM PST UPDATE: My professor tried this problem and got the same answer as me. The answer they had in the system seems to have been incorrect and my answer was correct. I'm getting credit for the question now. Thank you to those who looked over my work and helped me! The question says: "Integrate the function $H(x,y,z)=6x^2\sqrt{17-8z}$ over the parabolic dome $z=2-2x^2-2y^2,\,z\geq0$." Here's my work: $\mathbf{r}(r,\theta)=r\cos\theta\,\mathbf{i}+r\sin\theta\,\mathbf{j}+(2-2r^2)\,\mathbf{k}$ $\mathbf{r}_r\times\mathbf{r}_\theta=\left|\begin{array}{ccc} \mathbf{i} & \mathbf{j} & \mathbf{k} \\ \cos\theta &\sin\theta & -4r \\ -r\sin\theta & r\cos\theta & 0\end{array}\right|=\mathbin{\color{red}+}4r^2\cos\theta\,\mathbf{i}\mathbin{\color{red}{+}}4r^2\sin\theta\,\mathbf{j}+r\mathbf{k}$ (EDIT: I believe the $\mathbf{j}$-component should be positive here and not negative like I had it before. Either way it shouldn't have made a difference in the answer since $||\mathbf{r}_r\times\mathbf{r}_\theta||$ is unaffected.) (EDIT 2: Oops, the $\mathbf{i}$-component should be positive too! Either way this still shouldn't have changed the answer) $\begin{align} ||\mathbf{r}_r\times\mathbf{r}_\theta||&=\sqrt{16r^4(\cos^2\theta+\sin^2\theta)+r^2}\\ &=\sqrt{16r^4+r^2}\\ &=\sqrt{r^2(16r^2+1)}\\ &=r\sqrt{16r^2+1} \end{align}$ $\begin{align} H\left(\mathbf{r}(r,\theta)\right)&=6r^2\cos^2\theta\sqrt{17-8(2-2r^2)}\\ &=6r^2\cos^2\theta\sqrt{17-16+16r^2}\\ &=6r^2\cos^2\theta\sqrt{16r^2+1} \end{align}$ $\begin{align} \iint\limits_S H(x,y,z)\,d\sigma&=\iint\limits_R H(\mathbf{r}(r,\theta))||\mathbf{r}_r\times\mathbf{r}_\theta||\,dr\,d\theta\\ &=\int\limits_0^{2\pi}\int\limits_0^1 \left[6r^2\cos^2\theta\sqrt{16r^2+1}\left(r\sqrt{16r^2+1}\right)\right]\,dr\,d\theta\\ &=\int\limits_0^{2\pi}\int\limits_0^1 6r^3\cos^2\theta(16r^2+1)\,dr\,d\theta\\ &=\int\limits_0^{2\pi}\int\limits_0^1 (96r^5+6r^3)\cos^2\theta\,dr\,d\theta\\ &=\int\limits_0^{2\pi} \left(16r^6+\frac{3}{2}r^4\right)\Bigg|_{r=0}^{r=1}\,\cos^2\theta \,d\theta\\ &=\int\limits_0^{2\pi} \frac{35}{2}\cos^2\theta\,d\theta\\ &=\int\limits_0^{2\pi} \frac{35}{2}\left(\frac{1+\cos2\theta}{2}\right)\,d\theta\\ &=\frac{35}{4}\int\limits_0^{2\pi}(1+\cos2\theta)\,d\theta\\ &=\frac{35}{4}\left(\theta+\frac{1}{2}\sin 2\theta\right)\Bigg|_0^{2\pi}\\ &=\boxed{\frac{35}{2}\pi} \end{align}$ The answer was supposed to be $35\pi$. Where did I go wrong? |
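For completeness, a numerical cross-check of the same parametrized integral (same integrand and surface-area factor as in the work above) also lands on $35\pi/2$:

```python
import numpy as np
from scipy.integrate import dblquad

def integrand(r, theta):
    # H(r(r, theta)) * |r_r x r_theta| in the polar parametrization used above
    return 6 * r**2 * np.cos(theta)**2 * np.sqrt(16 * r**2 + 1) * r * np.sqrt(16 * r**2 + 1)

val, err = dblquad(integrand, 0, 2 * np.pi, lambda th: 0.0, lambda th: 1.0)
print(val, 35 * np.pi / 2)     # both ≈ 54.98
```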
$a,b>0$ then: $\frac{1}{a^2}+b^2\ge\sqrt{2(\frac{1}{a^2}+a^2)}(b-a+1)$ Posted: 22 Dec 2021 01:17 PM PST
Can anyone help me find a nice solution for this tough question? My approach works for 2 cases: Case 1: $b-a+1>0$; then squaring both sides, we get the equivalent inequality: $$\frac{1}{a^4}+b^4+2\frac{b^2}{a^2}\ge2\left(\frac{1}{a^2}+a^2\right)(b^2+a^2+1-2ab-2a+2b)$$ Or: $$\frac{1}{a^4}+b^4\ge\frac{2}{a^2}(a^2+1-2ab-2a+2b)+2a^2(b^2+a^2+1-2ab-2a+2b)$$ The rest is so complicated. Is there a nice idea, e.g. AM-GM or Cauchy-Schwarz, to prove this inequality? Case 2: $b-a+1<0$, in which case the inequality is obviously true. I hope we can find a better approach for the inequality. Thank you very much! |
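Before hunting for an AM-GM or Cauchy-Schwarz argument, a crude grid scan can at least confirm the inequality numerically and hint at the equality case:

```python
import numpy as np

def gap(a, b):
    lhs = 1 / a**2 + b**2
    rhs = np.sqrt(2 * (1 / a**2 + a**2)) * (b - a + 1)
    return lhs - rhs

a = np.linspace(0.05, 5, 400)
b = np.linspace(0.05, 5, 400)
A, B = np.meshgrid(a, b)
G = gap(A, B)
i = np.unravel_index(np.argmin(G), G.shape)
print(G.min(), A[i], B[i])     # minimum gap ≈ 0 near a = b = 1, suggesting the equality case
```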
Does sticking LEGO bricks together make them easier or harder to pack away into a box of fixed volume? Posted: 22 Dec 2021 01:08 PM PST My son uses a lot of LEGOs that we have to clean up every night before he goes to bed. The box we use for them is a little on the small side, so we often find that the LEGOs are stacked too high for the lid to close. He suggested that he stick the LEGOs together into long "towers" before he puts them away to make them fit better, and I said that was a good idea, but now I'm not really sure. On the one hand, stacking bricks together leaves zero space between them, eliminating air gaps in the container. However, while the "chunks" themselves have no wasted space inside them, it seems like they don't pack as tightly against each other when thrown into the box (of course, if carefully sized for the exact dimensions of the box and placed there in an organized way, that would be optimal, but we don't have time for that). What I'm wondering is whether stacking the bricks into towers or cubes actually might make them pack worse. To simplify this real-world question for the theoretical domain, let's assume:
Then the question is: What packs the tightest?:
|
What is the probability that the best candidate was hired? Posted: 22 Dec 2021 01:00 PM PST I have encountered some issues with the following question and have also seen the solution to it, but I still don't seem to understand why. An employer is about to hire one new employee from a group of N candidates, whose future potential can be rated on a scale from 1 to N. The employer proceeds according to the following rules: (a) Each candidate is seen in succession (in random order) and a decision is made whether to hire the candidate. (b) Having rejected m-1 candidates (m>1), the employer can hire the mth candidate only if the mth candidate is better than the previous m-1. Suppose a candidate is hired on the ith trial. What is the probability that the best candidate was hired? I am attaching the solution provided in the solutions manual, and I will share why I don't understand it. If we suppose that there are three candidates A, B, and C who are ranked as 1, 2, and 3 respectively, then there are 6 ways (3!) for them to be interviewed, which are: $1. A,B,C$ $2. A,C,B$ $3. B,A,C$ $4. B,C,A$ $5. C,A,B$ $6. C,B,A$ I may be wrong in setting it up this way already, but my thought process was the following. If we assume that the second candidate was hired and we want to find the probability that he was the best of the $3$ (in this example candidate C), then what I did was select the options that were still "valid". For example, we cannot say that options $3, 5$ and $6$ are valid, because the second candidate interviewed was worse than the first one. So we only have three valid options remaining ($1, 2$ and $4$). Since they are all equiprobable (if they are not, please let me know why not; I need to have my hope in humanity restored), the probability of having selected the best candidate would be $2/3$. If we were to follow the answer given in the solution it would be $1/2$. Thanks for any help a good fellow citizen can provide. I believe that my interpretation may be incorrect and don't know what other approach to consider, since it seems that there is not much to grab on to. |
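A simulation that conditions the same way as the listing above (the $i$-th interviewee beats everyone seen before them) is a quick way to see which of the two numbers it reproduces:

```python
import random

def prob_best_given_hired_at(i, N, trials=200_000, seed=0):
    """Estimate P(the i-th interviewee is best overall | they beat the first i-1)."""
    rng = random.Random(seed)
    eligible = best = 0
    for _ in range(trials):
        order = rng.sample(range(1, N + 1), N)    # ratings 1..N in random interview order
        if order[i - 1] > max(order[:i - 1]):     # hiring at trial i is allowed
            eligible += 1
            best += order[i - 1] == N
    return best / eligible

print(prob_best_given_hired_at(2, 3))   # compare with the 2/3 obtained by listing orderings
```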
How to Find the null space and range of the orthogonal projection of $\mathbb{R}^3$ on a plane Posted: 22 Dec 2021 01:03 PM PST Find the null space and range of the orthogonal projection of $\mathbb{R}^3$ on the plane $x+y-z = 0$ What are its nullity and rank? So my first idea was to find 3 vectors that span all of $\mathbb{R}^3$ with one of them being the normal vector. Points on plane: $P(1,1,2),Q(1,-1,0),R(4,-4,0)$ $\vec{n} = \langle1,1,-1\rangle$ $\vec{PQ} = \langle0,-2,-2\rangle$ $\vec{PR} = \langle3,-5,-2\rangle$ Put the vectors into a matrix and reduce... $$A = \begin{bmatrix} 1&0&3\\1&-2&-5\\-1&-2&-2 \end{bmatrix} \rightarrow RREF(A)= I $$ This implies that the rank is equal to $3$ and the nullity is $0$. I'm not sure whether or not I'm heading in the right direction with this one. Any pointers would be vastly helpful. |
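A more direct route, sketched numerically: the orthogonal projection onto the plane $x+y-z=0$ has matrix $P=I-\frac{nn^T}{n^Tn}$ with $n=(1,1,-1)$, and its rank, range and null space can be read off from that:

```python
import numpy as np

n = np.array([1.0, 1.0, -1.0])                   # normal of the plane x + y - z = 0
P = np.eye(3) - np.outer(n, n) / np.dot(n, n)    # orthogonal projection onto the plane

print(np.linalg.matrix_rank(P))                  # rank = dimension of the range (the plane)
print(P @ n)                                     # ≈ 0: the normal direction is in the null space

w, V = np.linalg.eigh(P)                         # eigenvalues 0, 1, 1 for a rank-2 projection
print(np.round(w, 6))
print(V[:, np.isclose(w, 1)])                    # two orthonormal vectors spanning the range
```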
Let $S_n$ be a Simple Random Walk. What is $E[S_m|S_n]$ if $m < n$? Posted: 22 Dec 2021 01:05 PM PST Let $S_n = W_1 + ... + W_n$ be a simple random walk with $W_i$ IID and $P[W_i = 1] = P[W_i = -1] = 1/2$. Find $E[S_m | S_n]$ when (a) $m > n$ and (b) $m < n$. For part (a), I get the answer of $S_n$. However, I do not know how to do part (b). How do I find the past given the future? Thanks. |
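For part (b), a quick simulation can suggest the form of the answer before proving it: fix $n$ and a value $s$, generate many walks, keep those with $S_n=s$, and average $S_m$; the result can then be compared with the candidate $\frac mn\,s$ that the symmetry of the steps suggests:

```python
import numpy as np

def cond_mean(m, n, s, walks=400_000, seed=0):
    """Monte Carlo estimate of E[S_m | S_n = s] for a simple +/-1 random walk."""
    rng = np.random.default_rng(seed)
    steps = rng.choice([-1, 1], size=(walks, n))
    S = steps.cumsum(axis=1)
    keep = S[:, n - 1] == s
    return S[keep, m - 1].mean()

m, n, s = 3, 10, 4
print(cond_mean(m, n, s), m / n * s)   # both ≈ 1.2
```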