Recent Questions - Mathematics Stack Exchange |
- understanding the power function when building UMP test
- Pivoting fails to preserve total unimodularity
- What does it mean for a Markov CHAIN to be recurrent (not just a state)?
- Application of Hölder's inequality: $(a+b)^t \le 2^t(a^t+b^t)$ for $t\ge 1.$
- Does the following inequality for $H^s$ norm hold?
- What is the general method for generating the possible combinations in a partition of $N$ sets?
- Confused on Properties of Residuals in the Conjugate Gradients Method
- GCD Proof of Odd Numbers
- Inequality between $\sup |f^{(k)}(x)|$ for $k=0,1,2$
- The integer solution $(x,y,z)$ of equation $ax^2+by^2+cz^2=0$
- The period of an arbitrary function $\phi$
- Can you prove these Abel-Plana integral-equations?
- How to quickly inverse a permutation by using PyTorch?
- Give an example where the linear program is infeasible and the dual program is feasible and unbounded
- How do you get from a model to a set of differential equations?
- How do we define density (real one, not mathematical)?
- Show that this language has (or does not have) unique writing for words
- Stuck on proving Bochner's formula
- Order-preserving bijection from the negative irrationals to the irrationals less than $\pi$
- If $Y$ is a closed subspace and $Z$ has finite dimension, then the set $Y+Z$ is closed
- Calculating an area by using Monte Carlo method
- Construct PDA for given language.
- Maximum likelihood estimator for $\mu=\sigma^2=\phi$
- Solution of system of differential equations using Laplace
- Integral of Euler.
- If $V$ is the standard representation of $S_n$, how to show that $\bigwedge^k V$ is irreducible $(0 \leq k \leq n-1)$?
- Why does an exponential function eventually get bigger than a quadratic
- show that $\sum_{i=1}^{p-1}2^i\cdot i^{p-2}\equiv \sum_{i=1}^{\frac{p-1}{2}}i^{p-2}\pmod p$
- How to prove a Minimal Surface minimizes Surface Tension
- Stiffness matrix on finite element method: singular or not?
understanding the power function when building UMP test Posted: 27 Mar 2021 08:12 PM PDT The fundamentals of Neyman-Pearson hypothesis testing are that we have the level of significance $\alpha = P(\text{reject } H_0 \mid H_0 \text{ is correct})$ and $1-\text{Power} = P(\text{accept } H_0 \mid H_1 \text{ is correct})$. When building the test $\psi$ we have that $E_{H_0}(\psi) = \alpha$. How is this relevant to the power function? I've seen in some cases that Power $= P(E_{H_0}(\psi) = \alpha \mid \text{free parameter})$, and I have no idea how that connects to Power $= E_{H_1}(\psi)$. |
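A possible way to see the connection (an editor's sketch, using the standard convention that a nonrandomized test $\psi$ is the indicator of its rejection region $R$): for $\psi(x)=\mathbf{1}\{x\in R\}$ we have $$E_\theta[\psi(X)]=P_\theta(X\in R)=P_\theta(\text{reject } H_0).$$ Evaluated under $H_0$ this expectation is the size $\alpha$; evaluated under $H_1$ it is exactly the probability of correctly rejecting, which is why one writes $\text{Power}=E_{H_1}(\psi)$.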
Pivoting fails to preserve total unimodularity Posted: 27 Mar 2021 08:11 PM PDT It is well known that a $\{0, \pm 1\}$ matrix $A$ is totally unimodular (TUM) if and only if the matrix $A'$ obtained from $A$ by a pivoting operation is totally unimodular. Here pivoting on an element $a_{ij}\neq 0$ is defined as first multiplying the row $a_i$ by $1/a_{ij}$ to make $a_{ij}'=1$, and then adding suitable multiples of row $a_i'$ to the other rows $a_k$, $k\neq i$, so that $a_{kj}$ becomes $0$ (the same as elementary row operations). My question is that for the following matrix, it seems that TUM is not preserved under pivoting. $A=\begin{pmatrix} 1 &1&1 \\ 1&-1&0\\ 1&0&0\\ \end{pmatrix}.$ Here $A$ is not TUM since $\begin{pmatrix} 1 &1 \\ 1&-1\\ \end{pmatrix}$ has determinant $-2$. On the other hand, if we pick $a_{31}$ as the pivot, then after pivoting the first two rows, the matrix reduces to $A'=\begin{pmatrix} 0 &1&1 \\ 0&-1&0\\ 1&0&0\\ \end{pmatrix},$ which is TUM. I think there must be some mistake in my understanding, but I can't figure out where. Thanks. |
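A brute-force numerical check of total unimodularity for small matrices (an editor-added sketch assuming NumPy is available; it simply enumerates every square submatrix determinant):

    import itertools
    import numpy as np

    def is_totally_unimodular(A, tol=1e-9):
        """Check TUM by testing every square submatrix determinant (fine for tiny matrices)."""
        A = np.asarray(A, dtype=float)
        m, n = A.shape
        for k in range(1, min(m, n) + 1):
            for rows in itertools.combinations(range(m), k):
                for cols in itertools.combinations(range(n), k):
                    d = np.linalg.det(A[np.ix_(rows, cols)])
                    if min(abs(d), abs(d - 1), abs(d + 1)) > tol:
                        return False
        return True

    A = np.array([[1, 1, 1], [1, -1, 0], [1, 0, 0]])
    print(is_totally_unimodular(A))        # False: the top-left 2x2 block has determinant -2
    A_prime = np.array([[0, 1, 1], [0, -1, 0], [1, 0, 0]])
    print(is_totally_unimodular(A_prime))  # True, matching the observation in the question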
What does it mean for a Markov CHAIN to be recurrent (not just a state)? Posted: 27 Mar 2021 08:11 PM PDT There are many resources offering equivalent definitions of recurrence for a state in a Markov chain - for example, state $x$ is recurrent if, starting in state $x$, you will eventually return to state $x$ almost surely. However, I have been asked to show that a certain Markov chain is recurrent. Half an hour of Google searching has not been able to answer whether this means the existence of a recurrent state, or that all states are recurrent. So I put it to the community - what does it mean for a Markov chain to be recurrent? |
Application of Hölder's inequality: $(a+b)^t \le 2^t(a^t+b^t)$ for $t\ge 1.$ Posted: 27 Mar 2021 08:15 PM PDT While searching for a proof of the algebra property of Sobolev spaces ($H^s(\mathbb R^n)$ is an algebra when $s > n/2$), I found these notes. On page two the author states that if $t \ge 1$ then Hölder's inequality implies $$((1+|x|)+(1+|y|))^t\le 2^t((1+|x|)^t + (1+|y|)^t).$$ I don't see why Hölder's inequality implies that the above inequality is true. Perhaps the author meant that for $t\ge 1$ and $a=(1+|x|),$ $b=(1+|y|)$ we obtain $$(a+b)^t \le 2^t(a^t+b^t).$$ Something similar to Cauchy/Young's inequality might work here since $$t = 1 \implies (a+b)\le 2(a+b),$$ $$t = 2 \implies (a+b)^2\le 4(a^2+b^2),$$ and the inequality covers the general case for $t\ge 1$. Trying $$(a+b)^t=(a+b)(a+b)^{t-1}\le \frac{1}{2}\left(\frac{2(a+b)^2 + (a+b)^{2t}}{(a+b)^2}\right)$$ doesn't look like the correct approach. Does the inequality follow from either Young's or Hölder's inequality? |
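A direct elementary argument for the stated bound, for nonnegative $a,b$ and $t\ge 1$ (an editor's sketch, independent of Hölder): $$(a+b)^t \le \bigl(2\max(a,b)\bigr)^t = 2^t \max(a,b)^t \le 2^t\,(a^t+b^t).$$ Convexity of $x\mapsto x^t$ even gives the sharper constant $2^{t-1}$, since $\left(\frac{a+b}{2}\right)^t \le \frac{a^t+b^t}{2}$.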
Does the following inequality for $H^s$ norm hold? Posted: 27 Mar 2021 08:16 PM PDT Assume $f(x,t):\Bbb{R}^d\times\Bbb{R} \to \Bbb{C}$ is a nicely behaved function, such that every expression below is well defined. Does the following inequality for the $H^s$ norm hold? $$\|\int_0^tf(x,s)ds\|_{H^s_x(\Bbb{R}^d)}\le \int_0^t\|f(x,s)\|_{H^s_x(\Bbb{R}^d)}ds$$ This seems to be Minkowski's inequality, correct? Here the $H^s$ norm is defined to be $$\int|\hat{f}(\omega)|(1+|\omega|^{2s})d\omega = \|f\|_{H^s}$$ Maybe I need to use Hölder's inequality? But I find it is of fractional order. If we view the $H^s$ norm as an absolute value, then it's the triangle inequality. |
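For reference, the Minkowski integral inequality alluded to here states, for a suitable norm $\|\cdot\|_X$ and under the usual measurability assumptions (an editor-added note; it applies in particular with $X = H^s_x(\Bbb{R}^d)$): $$\Bigl\|\int_0^t f(\cdot,s)\,ds\Bigr\|_{X}\le \int_0^t\|f(\cdot,s)\|_{X}\,ds,$$ i.e. the norm of an integral is at most the integral of the norms, which is precisely the continuous analogue of the triangle inequality mentioned at the end of the question.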
What is the general method for generating the possible combinations in a partition of $N$ sets? Posted: 27 Mar 2021 08:14 PM PDT For example, with two sets we have a partition of size $3$: $A \setminus B$, $B \setminus A$, and $A \cap B$. With three sets we have a partition of size $7$: $A \cap B \cap C$, $A \cap (B \setminus C)$, etc., and $A \setminus (B \cup C)$, etc. For $N$ sets, what is the partition size, and how can I generate the set of partition members? |
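A small enumeration sketch (editor-added; the sets and values below are made up for illustration): each of the $2^N-1$ nonempty in/out patterns over the $N$ sets picks out one region of the partition.

    from itertools import product

    def venn_regions(sets):
        """Return (pattern, region) pairs, one for each in/out pattern that is inside at least one set."""
        universe = set().union(*sets)
        regions = []
        for pattern in product([True, False], repeat=len(sets)):
            if not any(pattern):
                continue  # the all-"out" pattern lies outside every set
            region = set(universe)
            for inside, s in zip(pattern, sets):
                region &= s if inside else (universe - s)
            regions.append((pattern, region))
        return regions  # 2**N - 1 patterns; some regions may be empty for particular data

    A, B, C = {1, 2, 3}, {2, 3, 4}, {3, 4, 5}
    for pattern, region in venn_regions([A, B, C]):
        print(pattern, region)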
Confused on Properties of Residuals in the Conjugate Gradients Method Posted: 27 Mar 2021 08:01 PM PDT I'm reading through this explanatory paper on the conjugate gradient method. I've been understanding the 'Conjugate Directions' (pdf pages 27-36) algorithm (which is like conjugate gradients, but instead of applying Gram-Schmidt conjugation to the residuals, you apply it to some random independent vector set). Now, we are using the residuals at each step as the directions. There are a few things I don't understand that the author of the paper seems to present as "benefits" of using the residuals. He keeps mentioning that each residual is "orthogonal to all residuals before it". Is there an intuitive way to think about this? My question is: if this is true, why now? In the Steepest Descent algorithm, you can take multiple steps in the direction of the residual, but they often were in similar directions. Why, in this case, do they never overlap? My biggest confusion is regarding his conclusion on why using the residuals makes computing Gram-Schmidt conjugation easier. I don't understand the first paragraph on PDF page 37 (not actual page 37 of the paper). Why does the fact that the direction space is a Krylov space tell us that residual $i$ is A-orthogonal to direction space $i-1$? Also, I can't convince myself the second sentence in that paragraph is right. "The fact that the next residual $r_{i+1}$ is orthogonal to the space $D_{i+1}$ from Equation 39" seems wrong. Doesn't Equation 39 tell us that $r_{i+1}$ is orthogonal to the space $D_i$? Thanks, A |
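A quick numerical illustration of the mutual orthogonality of the residuals (editor-added sketch using NumPy; the matrix and right-hand side are made up for the demonstration and are not from the paper):

    import numpy as np

    def conjugate_gradient(A, b, n_steps):
        """Plain conjugate gradient; returns the residual produced at each step."""
        x = np.zeros_like(b)
        r = b - A @ x
        d = r.copy()
        residuals = [r.copy()]
        for _ in range(n_steps):
            Ad = A @ d
            alpha = (r @ r) / (d @ Ad)
            x = x + alpha * d
            r_new = r - alpha * Ad
            beta = (r_new @ r_new) / (r @ r)
            d = r_new + beta * d
            r = r_new
            residuals.append(r.copy())
        return residuals

    rng = np.random.default_rng(0)
    M = rng.standard_normal((6, 6))
    A = M @ M.T + 6 * np.eye(6)   # symmetric positive definite test matrix
    b = rng.standard_normal(6)
    res = conjugate_gradient(A, b, 5)
    # Off-diagonal entries (inner products of distinct residuals) are numerically zero:
    print(np.array([[ri @ rj for rj in res] for ri in res]).round(8))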
GCD Proof of Odd Numbers Posted: 27 Mar 2021 07:59 PM PDT I found this problem in Discrete Mathematics and Its Applications while studying for an exam and am having trouble solving it. Prove that for $a,b\in \mathbb{N}^*$ with $a \ge b$, $\gcd(a,b) = \gcd(a,b, a-b)$ if $a$ and $b$ are odd. My current idea was to use the principles in the Euclidean Algorithm to somehow prove it, but I'm running into a wall. Any help would be great, thanks! |
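One possible route (an editor's sketch, not necessarily the book's intended argument): an integer $d$ divides both $a$ and $b$ if and only if it divides all of $a$, $b$ and $a-b$, because $d\mid a$ and $d\mid b$ force $d\mid a-b$, while the reverse direction simply drops a condition. The two gcds therefore range over the same set of common divisors, so $$\gcd(a,b)=\gcd(a,b,a-b).$$ As stated, this holds for any $a\ge b$; the oddness hypothesis suggests the exercise may originally have involved $\frac{a-b}{2}$, which is where parity would matter.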
Inequality between $\sup |f^{(k)}(x)|$ for $k=0,1,2$ Posted: 27 Mar 2021 07:52 PM PDT Let $f$ be twice differentiable on an interval $I$. Let $M_k=\sup \limits_{x\in I} |f^{(k)}(x)|$ for $k=0,1,2$. Show that if $I=[-a,a]$, then $$|f'(x)|\leq \dfrac{M_0}{a}+\dfrac{x^2+a^2}{2a}M_2;$$ Remark: By interval I mean that $I$ could be an open, closed or half-open interval. After some attempts I was actually able to prove this inequality. My question is not about the solution but about the constants $M_k$. Note that if $I=[-a,a]$ then $M_0,M_1$ exist because $f$ and $f'$ are continuous functions. But it is possible that $M_2$ could be infinite, right? Can anyone explain how to handle this case, please? |
The integer solution $(x,y,z)$ of equation $ax^2+by^2+cz^2=0$ Posted: 27 Mar 2021 07:51 PM PDT Please prove the following theorem. $ax^2+by^2+cz^2=0 \ \cdots(*) \ $ where $a$, $b$ and $c$ are nonzero integers. $(x_0,y_0,z_0)$ is an integer solution of equation $(*)$ with $z_0\neq0$. Then all integer solutions $(x,y,z)$ with $z\neq0$ of equation $(*)$ are of the form $$x=\pm\frac{D}{d}(-ax_0s^2-2by_0sr+bx_0r^2)$$ $$y=\pm\frac{D}{d}(ay_0s^2-2ax_0sr-by_0r^2)$$ $$z=\pm\frac{D}{d}(az_0s^2+bz_0r^2)$$ where $r$ and $s>0$ are coprime integers, $D$ is a nonzero integer, and $d\mid 2a^2bcz_0^3$ is a positive integer. |
The period of an arbitrary function $\phi$ Posted: 27 Mar 2021 07:46 PM PDT Suppose $\phi\colon X \to X$, and we observe that applying $j$ iterations of $\phi$ to some $x\in X$ gives us back $x$. Why does this tell us that the period of $\phi$ divides $j$, rather than equals $j$? And what would we have to do in order to conclude that $j$ is the period of this map? |
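A sketch of the standard divisibility argument (editor-added), writing $m$ for the least positive integer with $\phi^m(x)=x$: write $j=qm+r$ with $0\le r<m$; then $$x=\phi^j(x)=\phi^{r}\bigl((\phi^{m})^{q}(x)\bigr)=\phi^{r}(x),$$ so minimality of $m$ forces $r=0$, i.e. $m\mid j$. To conclude that $j$ itself is the period, one must additionally rule out every proper divisor: check that $\phi^{d}(x)\neq x$ for each divisor $d$ of $j$ with $d<j$.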
Can you prove these Abel-Plana integral-equations? Posted: 27 Mar 2021 07:52 PM PDT @Dark Malthorp got me started on these equations, here, and by trial and error I arrived at them in a form that looks beautiful to me. But I would like to be able to present a proof along with them. Any help here? Here is some Mathematica code with results that justify them to me: |
How to quickly inverse a permutation by using PyTorch? Posted: 27 Mar 2021 07:42 PM PDT I am confused about how to quickly restore an array shuffled by a permutation. I wrote the following code based on matrix multiplication (the transpose of a permutation matrix is its inverse), but this approach is too slow when I use it in my model training. Does a faster implementation exist? |
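One index-based approach that avoids forming a permutation matrix entirely (an editor-added sketch, not the asker's original code; torch.argsort and advanced indexing are standard PyTorch operations):

    import torch

    def invert_permutation(perm: torch.Tensor) -> torch.Tensor:
        """Return inv such that inv[perm] == arange(n), so x[perm][inv] recovers x."""
        inv = torch.empty_like(perm)
        inv[perm] = torch.arange(perm.numel(), device=perm.device)
        return inv  # equivalently: torch.argsort(perm)

    perm = torch.randperm(5)
    x = torch.randn(5)
    shuffled = x[perm]
    restored = shuffled[invert_permutation(perm)]
    print(torch.allclose(x, restored))  # True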
Give an example where the linear program is infeasible and the dual program is feasible and unbounded Posted: 27 Mar 2021 08:14 PM PDT Give an example of a $2\times2$ matrix $A$, a $2\times1$ vector $b$ and a $2\times1$ vector $c$ such that the linear program is infeasible and the dual program is feasible and unbounded. I understand the question; I am just having trouble visualizing the graph of a feasible, unbounded dual program. |
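One concrete construction (editor-added, under the common convention that the primal is $\max\,c^\top x$ subject to $Ax\le b$, $x\ge 0$, with dual $\min\,b^\top y$ subject to $A^\top y\ge c$, $y\ge 0$): $$A=\begin{pmatrix}1 & -1\\ -1 & 1\end{pmatrix},\qquad b=\begin{pmatrix}-1\\-1\end{pmatrix},\qquad c=\begin{pmatrix}0\\0\end{pmatrix}.$$ Adding the two primal constraints gives $0\le-2$, so the primal is infeasible. The dual constraints read $y_1-y_2\ge 0$ and $-y_1+y_2\ge 0$, i.e. $y_1=y_2=t\ge 0$, which is feasible, and along this ray the dual objective $-y_1-y_2=-2t$ decreases without bound, so the dual is feasible and unbounded.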
How do you get from a model to a set of differential equations? Posted: 27 Mar 2021 07:44 PM PDT Consider the case of a predator-prey system. Morally speaking, there should be a way to take a model (which is to say, a complete description of predator-prey dynamics that can be executed by a computer program) and, by inspecting it, derive the Lotka-Volterra equations, or any other kind of differential equation, logistic map, or quantitative relationship. Both models and differential equations are descriptions of a system, and because the former is more comprehensive and the latter just a fragment, there must be a way, morally speaking, to derive the latter by inspecting the former. |
How do we define density (real one, not mathematical)? Posted: 27 Mar 2021 07:58 PM PDT We all know from school that density is defined as mass over volume $$\rho = \frac{m}{V}$$ I'm wondering what the mathematically correct definition of density is. I'm considering two options. OPTION 1. Is density defined by means of a multiple integral? $$m = \iiint \rho \, \mathrm{d}x\,\mathrm{d}y\,\mathrm{d}z$$ This way density is $\rho = \cfrac{\partial^3 m}{\partial x \partial y \partial z}$. OPTION 2. Is density defined by means of a volume integral? $$m = \int \rho \, \mathrm{d}V$$ This way density is $\rho = \cfrac{dm}{dV}$. If Option 2 is correct, then what is a volume integral? In my engineering calculus course, I learnt about multiple and curvilinear integrals. Curvilinear integrals include line integrals (of two types) and surface integrals (of two types). I assume that a volume integral is a curvilinear integral, but I'm not sure. If a volume integral is a curvilinear integral, then what is the way of calculating it in Cartesian coordinates? I mean that there are formulas to calculate line integrals in Cartesian coordinates (one formula for each type of line integral) and there are formulas to calculate surface integrals in Cartesian coordinates (again, each type of surface integral has a formula to calculate it in Cartesian coordinates); is there an analogous formula for the volume integral? If neither option 1 nor option 2 is correct, then what is the mathematically rigorous way to define density? P.S. I did read this post, but my question is still unanswered. Is density a mass derivative over volume or a third mass derivative over three Cartesian coordinates? In other words, I'm trying to figure out whether there is such a thing as a coordinate along a volume (I know about a coordinate along a line, hence line integrals; I know about a coordinate along a surface, hence surface integrals; but is there a volume integral in the same sense ...). I found an answer to half of my question here. I.e., the answer to the post I've cited basically validates that there is such a thing as a volume integral and that we can map it to Cartesian coordinates using a Jacobian (the same way as we do for surface integrals). But the second part of my question is still not answered: how do we introduce density to physics, through a multiple integral or through a volume integral? |
Show that this language has (or does not have) unique writing for words Posted: 27 Mar 2021 07:54 PM PDT I have the following language: Let $A = \{a,e,i,o,u\}$, $S = \{l\}$, $\Sigma = A \cup S$. Then $L \subseteq \Sigma^*$ is defined by the rules:
I'm asked to prove or disprove (I'm trying to prove it since I'm convinced it holds) that this language has a unique writing for words, that is, if a word in $L$ has length greater than $1$, then there is a unique construction chain (up to permutation) for it. In other words, if $lX_1\dots X_nl = lY_1 \dots Y_kl$ then $n = k$ and for all $i$, $X_i = Y_i$. My strategy for proving this was: prove that no initial section of a word (except for the word itself) can be a word. Then I use the fact that, since $X_1$ and $Y_1$ are initial segments of the same expression, I must have (without loss of generality) $Y_1 = X_1E$ for some expression $E$. If $E$ is not empty, then $X_1$ can't be a word, since it is a nontrivial initial segment of the word $Y_1$. Then inductively it follows that all words on both sides are equal. However, I'm struggling to prove this. Any help is appreciated |
Stuck on proving Bochner's formula Posted: 27 Mar 2021 07:53 PM PDT The statement to prove is Bochner's formula: for a smooth function $u$ on a Riemannian manifold $(M,g)$, $$\frac{1}{2}\Delta|\nabla u|^2 = |\nabla^2 u|^2 + \text{Ric}(\nabla u,\nabla u) + g(\nabla\Delta u,\nabla u).$$
My attempt It suffices to prove the formula at any point $p$. Choose a normal frame $\{e_i\}_{i=1}^n$ near $p$. Then for any smooth function $f$ we have \begin{align*} \Delta f =(\nabla^2f)(e_i,e_i) =(\nabla_{e_i} df)(e_i) = e_i (df,e_i)- (\nabla_{e_i}e_i,df) \end{align*} and hence \begin{align*} \Delta f(p)=(e_ie_if)(p) \end{align*} Thus the first term is \begin{align*} \boxed{\frac{1}{2} \Delta |\nabla u|^2(p) = \frac{1}{2} (e_ie_i (e_ju)^2)(p) = (e_ie_j u)^2(p)+e_ju(p)\cdot e_ie_ie_ju(p)}\tag{1} \end{align*} Now we compute the second term. Note that \begin{align*} |\nabla^2 u| = \left( (\nabla^2 u)(e_i,e_j) \right)^2 = \left( (\nabla_{e_j} du)(e_i) \right)^2 = \left( e_j (du,e_i) - (du,\nabla_{e_j}e_i) \right)^2 \end{align*} and hence \begin{align*} \boxed{|\nabla^2 u|(p)= (e_je_i u)^2(p)}\tag{2} \end{align*} Now we compute the third term. Note that \begin{align*} \text{Ric}(\nabla u,\nabla u) &= R(e_i,\nabla u,e_i,\nabla u) = R(e_i,e_j,e_i,e_k) e_ju \cdot e_k u\\ &= \left(\left<\nabla_{[e_i,e_j]} e_i,e_k\right>-\left<[\nabla_{e_i},\nabla_{e_j}]e_i,e_k\right>\right) e_ju \cdot e_k u \end{align*} and hence \begin{align*} \boxed{\text{Ric}(\nabla u,\nabla u)(p) =\left<\nabla_{e_j}\nabla_{e_i}e_i,e_k\right>e_ju \cdot e_k u\Big|_p -\left<\nabla_{e_i}\nabla_{e_j}e_i,e_k\right>e_ju \cdot e_k u\Big|_p}\tag{3} \end{align*} Now we compute the last term. Note that \begin{align*} g(\nabla\Delta u,\nabla u) &= \left<\nabla\Delta u,e_i\right>\left<\nabla u,e_i\right>\\ &= e_j \left(e_i e_i u- \left<\nabla_{e_i}e_i,\nabla u\right>\right) e_ju\\ &= e_j e_i e_i u\cdot e_ju- e_j\left<\nabla_{e_i}e_i,\nabla u\right>\cdot e_ju \end{align*} and hence \begin{align*} \boxed{ g(\nabla\Delta u,\nabla u)(p) =(e_j e_i e_i u)(p)\cdot e_ju(p) -\left<\nabla_{e_j}\nabla_{e_i}e_i,e_k \right>\Big|_pe_ku(p)\cdot e_ju(p)}\tag{4} \end{align*} Thus $(1)-(2)-(3)-(4)=0$ is equivalent to that $$ \left<\nabla_{e_i}\nabla_{e_j}e_i,e_k\right>e_ju \cdot e_k u\Big|_p=0 $$ I got stuck on proving this. What's wrong with my proof? Any helps will be highly appreciated! |
Order-preserving bijection from the negative irrationals to the irrationals less than $\pi$ Posted: 27 Mar 2021 07:53 PM PDT Is there an easy to describe (using a formula or something more-or-less constructive, no zig-zag argument) an order-preserving bijection between the negative rational numbers and the rational numbers less than $\pi$? Let $P$ be the set of all irrational numbers. Let $G$ be the set of all negative irrational numbers, let $H$ be the set of all irrational numbers greater than or equal to $\pi$, and let $Y=G\cup H$. Then $P$ and $Y$ are order-isomorphic, i.e. there is an order-preserving bijection between them. One way to see this is to first construct an order-preserving bijection between the negative rational numbers and the rational numbers less than $\pi$, using a back-and-forth (zig-zag) argument (like the usual proof that every countably infinite dense linear order with no first or last element is order-isomorphic to the rationals). (Alternatively use that the negative rationals are a countably infinite dense linear order with no first or last element, and so are the rationals less than $\pi.$) Once we have this bijection we could (uniquely, e.g. using Dedekind cuts) extend it to an order-preserving bijection between $(-\infty,0)$ and $(-\infty,\pi)$ (and in fact between $(-\infty,0]$ and $(-\infty,\pi]$, but I prefer to only look at the order-preserving bijection between $(-\infty,0)$ and $(-\infty,\pi)$.) Note that the latter is an order-preserving bijection that sends rationals to rationals, and irrationals to irrationals. Taking it, together with the identity on $[\pi,\infty)$ (and restricting to the irrationals only) we get the required order-preserving bijection between $P$ and $Y$. My question is whether there is an easier way to describe an order-preserving bijection between $P$ and $Y$. Or more "constuctive" (in one way or another), or using some easy formula. E.g. one easy order-preserving bijection between $(-\infty,0)$ and $(-\infty,\pi)$ is translation by $\pi$, except it sends rationals to irrationals (and sends some irrationals to rationals, though it sends most irrationals to irrationals). I feel there should be no easy way, since any easy way would send $0$ to a rational number (because that is in the nature of "easy" bijections, I would think), but $0$ must go to $\pi$ (if we extend, and get an order-preserving bijection between $(-\infty,0]$ and $(-\infty,\pi]$). (Or $0$ may go to some other irrational, even when we don't necessarily use the identity on $[\pi,\infty)$, but at any rate $\pi$ must go to an irrational, and $0$ must "simultaneously" go to a rational and to that irrational to which $\pi$ goes, which clearly is impossible.) So, my related question is: Does anybody know what I am talking about ... and could you please give me directions, references that would confirm my guess that there is no "easy" order-preserving bijection between the negative irrationals and the irrationals strictly less than $\pi$. (Equivalently, that there is no "easy" order-preserving bijection between the negative rationals and the rationals less than $\pi$.) (Or, perhaps there is such an order-preserving bijection that might qualify as "easy?") The answer would need to include a mathematically precise definition of "easy" (with the risk that I would not be able to understand it, but that is another matter, just try your best to come up with what you believe is an appropriate answer). What area of mathematics is involved (what books do I need to read)? 
Regarding the extension of an order-preserving bijection from the rationals to the reals (if someone needs to see the details), see Extension of order-preserving bijection from rationals to reals. A closely-related question seems to be: Order preserving bijection from ${\mathbb Q}\times{\mathbb Q}$ to $\mathbb Q$, where one asks for "wondering if something simpler could be given, more analytic." There are two answers there, the accepted one starts with "surreal numbers up to generation $\omega$," which is not something familiar to me. I would need to take a more careful look at those answers, but I feel that my version of this question is "different" because $\pi$ is an endpoint (and the irrationals rather than rationals are involved), and there are no endpoints in ${\mathbb Q}\times{\mathbb Q}.$ Regarding how I came up with this, I don't remember anymore, but I was thinking of subsets of the reals that are not $F_\sigma$ but are reverse order-isomorphic to themselves (the set $P$ being one example with $p\mapsto-p$), and at some point realized that there seemed to be no "easy" order-isomorphism between certain sets, and felt curious to find out more about this, as asked above. (I am puzzled there is no tag "linear-orders." There is "order-theory," but that is too general, seems mostly about partial orders(?), and there is "well-orders," but this is too narrow.) |
If $Y$ is a closed subspace and $Z$ has finite dimension, then the set $Y+Z$ is closed Posted: 27 Mar 2021 07:51 PM PDT So, we consider a normed space $X$ and two subspaces $Y,Z$. If $Y$ is a closed subspace and $Z$ has finite dimension, then we need to prove that the set $Y+Z$ is closed. |
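A standard line of attack (an editor's sketch, assuming familiarity with quotient spaces): let $\pi : X \to X/Y$ be the quotient map, which is continuous because $Y$ is closed. Then $$Y + Z = \pi^{-1}\bigl(\pi(Z)\bigr),$$ and $\pi(Z)$ is a finite-dimensional subspace of the normed space $X/Y$, hence closed; the preimage of a closed set under a continuous map is closed.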
Calculating an area by using Monte Carlo method Posted: 27 Mar 2021 08:16 PM PDT Let B and C be regions in $\mathbb{R}^2$ such that $ B \subset C $. Let's denote their areas $ A(B) \ \mathrm{and} \ A(C)$. We know $A(C)$ but not $A(B)$. We want to find out that area. We know that $$ \mathbb{P}\left(\left(x{,}\ y\right)\in B \mid (x, y) \in C\right)=\frac{A\left(B\right)}{A\left(C\right)}.$$ We select randomly N points in C and notice that $n_i $ of those are in B. We repeat this M times. Mean of those points in B is therefore $$ \sum_{i=1}^M\frac{n_i}{M}.$$ By law of large numbers we know that $$ \lim_{M\rightarrow\infty}\sum_{i=1}^M\frac{n_i}{M}=\mathbb{E}\left(n_i\right)\ .$$ Let's divide the equation by the number of randomly chosen points N: $$ \lim_{M\rightarrow\infty}\sum_{i=1}^M\frac{n_i}{MN}=\frac{\mathbb{E}\left(n_i\right)\ }{N}$$ Now I am having trouble to show that $$\frac{\mathbb{E}\left(n_i\right)\ }{N}=\mathbb{P}\left(\left(x{,}\ y\right)\in B \mid (x, y) \in C\right)=\frac{A\left(B\right)}{A\left(C\right)} .$$ By the whole idea of Monte Carlo Simulation that should be true, right? Here is how far I got: $$ \frac{\mathbb{E}\left(n_i\right)}{N}=\frac{1}{N}\sum_{i=0}^Ni\cdot\left(\frac{A\left(B\right)}{A\left(C\right)}\right)^i\cdot\left(1-\frac{A\left(B\right)}{A\left(C\right)}\right)^{N-i},$$ where $$ \mathbb{P} (n_i=i)=\left(\frac{A\left(B\right)}{A\left(C\right)}\right)^i\cdot \left(1-\frac{A\left(B\right)}{A\left(C\right)}\right)^{N-i}.$$ Could someone help me complete the "proof"? |
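A possible way to finish (editor-added sketch, writing $p = A(B)/A(C)$ and assuming the $N$ points are drawn independently and uniformly from $C$): the count $n_i$ is binomial, so its pmf carries a binomial coefficient, $$\mathbb{P}(n_i = k) = \binom{N}{k} p^k (1-p)^{N-k},\qquad \mathbb{E}(n_i) = \sum_{k=0}^{N} k \binom{N}{k} p^k (1-p)^{N-k} = Np,$$ hence $\mathbb{E}(n_i)/N = p = A(B)/A(C)$ as desired. Alternatively, writing $n_i = \sum_{j=1}^N \mathbf{1}\{(x_j,y_j)\in B\}$ and using linearity of expectation gives $\mathbb{E}(n_i)=Np$ without evaluating the sum.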
Construct PDA for given language. Posted: 27 Mar 2021 07:43 PM PDT The problem is to construct a PDA for $L = \{a^n b^m \mid 1 \le n < m\}$ and for $L = \{a^n b^m \mid 1 \le n \le m\}$. If the language is deterministic context-free, a DPDA should be defined; otherwise an NPDA. So far I've written these transitions ($z$ is the bottom stack symbol, $q_2$ is the final state): (q0, epsilon, epsilon), (q0, z) This works for the first language, but I'm not sure how to adapt it to include $n = m$ for the second language. Any help and explanation with steps is appreciated. |
Maximum likelihood estimator for $\mu=\sigma^2=\phi$ Posted: 27 Mar 2021 08:15 PM PDT I have a question about maximum likelihood estimators. For independent normally distributed random variables I have found the maximum likelihood estimators $\hat{\mu}=\frac{1}{n} \sum_{i=1}^n y_i$ and $\hat{\sigma^2}=\frac{1}{n}\sum_{i=1}^{n}(y_i-\hat{\mu})^2$, and now I have to compare these with the maximum likelihood estimator in the case $\sigma^2=\mu=\phi$, which I have found to be $\hat{\phi}(Y_1,...,Y_n)=\frac{-n+\sqrt{n^2 +4n \sum_{i=1}^{n}Y_i}}{2n}$. How would you compare the two results? |
Solution of system of differential equations using Laplace Posted: 27 Mar 2021 07:43 PM PDT The question is: for the circuit shown in the figure, find the currents using the Laplace transform and without using the Laplace transform, assuming zero currents and charge at $t=0.$ \begin{align*} I'_1 =&-I_1R_1/L+I_2 R_1/L + E/L &\\ I'_2 =&-I_2/(C(R_1+R_2))+(-I_1R_1/L+I_2 R_1/L + E/L)R_1/(R_1+R_2) \end{align*} The first part was done using $\left[ \begin{array}{r} I'_1\\I'_2 \end{array} \right]=\left[ \begin{array}{rr} -R_1/L & R_1/L \\ -R^2_1/(L(R_1+R_2)) &-1/(C(R_1+R_2))+ R^2_1/(L(R_1+R_2)) \end{array} \right] \left[ \begin{array}{r} I_1\\I_2 \end{array} \right]+ \left[ \begin{array}{r} E/L\\ER_1/(L(R_1+R_2)) \end{array} \right] $ and the answer is $\mathbf{y}=139.44\mathsf{e}^{-1.54 t}\left[ \begin{array}{rr} 2 \\ 0.46 \end{array} \right] -47.58 \mathsf{e}^{-0.26 t}\left[ \begin{array}{rr} 2 \\ 1.74 \end{array} \right] +\left[ \begin{array}{rr} 100 \\ 0 \end{array} \right]$ Now for the second part, using the Laplace transform, I'm not getting the correct answer. For instance, from $\left[ \begin{array}{rr} s+R_1/L & -R_1/L \\ R^2_1/(L(R_1+R_2)) &s+1/(C(R_1+R_2))- R^2_1/(L(R_1+R_2)) \end{array} \right]\left[ \begin{array}{r} I_1(s)\\I_2(s) \end{array} \right] = \left[ \begin{array}{r} E/sL\\ER_1/(sL(R_1+R_2)) \end{array} \right]$ I'm getting, for $i(t)$, $\displaystyle \frac{400\,\sqrt{41}\,{\mathrm{e}}^{-\frac{9\,t}{10}} \,\sinh \left(\frac{\sqrt{41}\,t}{10}\right)}{41}$ Why am I not getting the same answer when using Laplace? Note that $R_1=2$, $R_2=8$, $L=1$ H, $C=0.5$ F, $E=200$ V. |
Integral of Euler. Posted: 27 Mar 2021 07:58 PM PDT Here is an integration problem I found in an old book: integrate $$\int\frac{1+x^2}{1-x^2}\frac{dx}{\sqrt{1+x^4}}$$ The integral is attributed to Euler. After another substitution $z=y^2$ I get $$\frac{1}{2\sqrt{2}}\int\frac{dz}{\sqrt{z^2-1}}=\frac{1}{2\sqrt{2}}\cosh^{-1}z$$ so my answer is $$\frac{1}{2\sqrt{2}}\cosh^{-1}\left(\frac{1+x^2}{1-x^2}\right)^2$$ However the answer in the back is $$\frac{1}{\sqrt{2}} \sinh^{-1}\frac{\sqrt{2}x}{1-x^2}$$ After much work (!!!) I have shown that the two forms are equal up to a constant. My question is: how can one solve the original integral to get the alternative answer directly? What substitution should one use? |
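A substitution that appears to reach the book's form in one step (an editor's sketch, worth verifying): with $$t=\frac{\sqrt{2}\,x}{1-x^2},\qquad dt=\frac{\sqrt{2}\,(1+x^2)}{(1-x^2)^2}\,dx,\qquad 1+t^2=\frac{1+x^4}{(1-x^2)^2},$$ the integrand becomes $$\frac{1+x^2}{1-x^2}\cdot\frac{dx}{\sqrt{1+x^4}}=\frac{1}{\sqrt{2}}\cdot\frac{dt}{\sqrt{1+t^2}},$$ so the integral is $\frac{1}{\sqrt{2}}\sinh^{-1} t = \frac{1}{\sqrt{2}}\sinh^{-1}\frac{\sqrt{2}\,x}{1-x^2} + C$.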
If $V$ is the standard representation of $S_n$, how to show that $\bigwedge^k V$ is irreducible $(0 \leq k \leq n-1)$? Posted: 27 Mar 2021 07:42 PM PDT Edit: As Nate pointed out, the hint was wrong. I added the correct answer (from Fulton & Harris) below. The hint I got: We have a decomposition of the regular representation $R=U \oplus V$ where $U$ is the trivial representation. Since we have the expansion $$ \bigwedge^k(U \oplus V) \cong \bigoplus_{a+b=k}\bigwedge^a U \otimes \bigwedge^b V $$ it suffices to compute $(\chi_{\wedge^k R},\chi_{\wedge^k R})$, and it should be $2$. Then we can deduce that $\bigwedge^k V$ is irreducible whenever $0 \leq k \leq n-1$. I managed to compute $(\chi_{\wedge^2 R},\chi_{\wedge^2 R})$ when $n=3$, and it is indeed $2$. But I don't know how to proceed. Is there any way to expand $(\chi_{\wedge^k R},\chi_{\wedge^k R})$ with respect to $U$ and $V$ (some massive computation involved, I guess)? I know how to compute $\chi_{\wedge^2 R}(g)$, but how do I compute $\chi_{\wedge^k R}(g)$? My guess is that there is some much simpler approach for the regular representation. Any hint or solution will be sincerely appreciated! In case people are confused by the terminology: |
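For the character computation, the following standard formulas may help (editor-added; they hold for any finite-dimensional representation $W$): $$\chi_{\wedge^2 W}(g)=\tfrac{1}{2}\bigl(\chi_W(g)^2-\chi_W(g^2)\bigr),\qquad \chi_{\wedge^3 W}(g)=\tfrac{1}{6}\bigl(\chi_W(g)^3-3\,\chi_W(g)\chi_W(g^2)+2\,\chi_W(g^3)\bigr),$$ and in general $\chi_{\wedge^k W}(g)=e_k(\lambda_1,\dots,\lambda_n)$, the $k$-th elementary symmetric polynomial in the eigenvalues of $g$ acting on $W$, which Newton's identities express through the power sums $\chi_W(g^m)$.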
Why does an exponential function eventually get bigger than a quadratic Posted: 27 Mar 2021 07:53 PM PDT I see this answer to this question and this one. My $7$th grade son has this question on his homework: How do you know an exponential expression will eventually be larger than any quadratic expression? I can explain to him for any particular example such as $3^x$ vs. $10 x^2$ that he can just try different integer values of $x$ until he finds one, e.g. $x=6$. But how can a $7$th grader understand that it will always be true, that even $1.0001^x$ will eventually be greater than $1000 x^2$? They obviously do not know the Binomial Theorem, derivatives, Taylor series, L'Hopital's rule, limits, etc. Note: that is the way the problem is stated; it does not say that the base of the exponential expression has to be greater than $1$. Although for a base between $0$ and $1$ it is still true that there exists some $x$ where the exponential is larger than the quadratic, the phrase "eventually" makes it sound like there is some $M$ where it is larger for all $x>M$. So, I don't like the way the question is written. |
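One elementary argument that avoids calculus (an editor's sketch; whether it is accessible to a given 7th grader is of course a judgment call): going from $x$ to $x+1$ multiplies $b^x$ by the fixed factor $b>1$, while it multiplies $x^2$ by $\left(\frac{x+1}{x}\right)^2$, a factor that shrinks toward $1$ as $x$ grows. Once $x$ is large enough that $\left(\frac{x+1}{x}\right)^2 < b$, each further step multiplies the ratio $\frac{b^x}{x^2}$ by at least a fixed number greater than $1$, and repeated multiplication by a fixed number greater than $1$ eventually overcomes any head start, so $b^x$ eventually exceeds $1000x^2$ (or any quadratic).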
show that $\sum_{i=1}^{p-1}2^i\cdot i^{p-2}\equiv \sum_{i=1}^{\frac{p-1}{2}}i^{p-2}\pmod p$ Posted: 27 Mar 2021 08:03 PM PDT Let $p$ be an odd prime number; show that $$\sum_{i=1}^{p-1}2^i\cdot i^{p-2}\equiv \sum_{i=1}^{\frac{p-1}{2}}i^{p-2}\pmod p$$ I know to use $$i^{p-2}\equiv\dfrac{1}{i}\pmod p$$ so $$LHS\equiv \sum_{i=1}^{p-1}\dfrac{2^i}{i}\pmod p$$ but from there I can't complete the proof. |
How to prove a Minimal Surface minimizes Surface Tension Posted: 27 Mar 2021 08:13 PM PDT I know minimal surfaces often minimize surface area. I know their mean curvature is found by setting the divergence of the second fundamental form equal to zero. Is this the same as, or analogous to, showing that tension is also minimized, by assuming the shape the surface assumes is perfectly elastic (springing back to its unperturbed state when a perturbing force is removed)? A reference at the advanced undergraduate level would be greatly appreciated. Thanks. |
Stiffness matrix on finite element method: singular or not? Posted: 27 Mar 2021 08:08 PM PDT I have to solve the problem $$ \begin{split} -\Delta u +u=& f \text{ on } \Omega \\ u=& g \text{ on }\partial \Omega\\ \end{split}$$ I don't specify $f,g,\Omega$ since they are not important for my question. Writing the weak formulation: $$ \sum_{i=1}^{N_{h}} u_{i} \Big(\int_{\Omega}\nabla\varphi_{i}\cdot\nabla\varphi_{j}\, dx+\int_{\Omega}\varphi_{i}\varphi_{j}\, dx\Big)=\int_{\Omega}f\varphi_{j}\, dx\quad\forall j=1,\dots,N_{h}$$ where $\{\varphi_{i} \}_{i=1,\dots,N_{h}}$ is the basis of Lagrange polynomials for the finite-dimensional space $V_{h}$, $\dim(V_{h})=N_{h}$. |