Thursday, June 30, 2022

Recent Questions - Mathematics Stack Exchange



Solving an initial value problem (Systems of Linear Differential equations)

Posted: 30 Jun 2022 09:23 AM PDT

Here's the question: Solve the initial value problem \begin{equation} \begin{cases} \displaystyle x^{'}_{1}(t) = x_{1}(t) + 2x_{2}(t) + x_{3}(t) \\ x^{'}_{2}(t) = x_{1}(t) - 5x_{2}(t) + 3x_{3}(t)\\ x^{'}_{3}(t) = 3x_{1}(t) + 2x_{2}(t) - x_{3}(t)\\ \displaystyle \end{cases} \end{equation}

with initial conditions \begin{equation} \begin{cases} \displaystyle x_{1}(0) = -1 \\ x_{2}(0) = 3\\ x_{3}(0) = 19\\ \displaystyle \end{cases} \end{equation}

Note: This is an eigenvalue and eigenvector problem, so one has to find the eigenvalues of the matrix first. This is the part I find difficult: specifically, simplifying the determinant of $A-\lambda I$ (which has to be done so that finding the $\lambda$ values is easy).
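Since the sticking point is simplifying $\det(A-\lambda I)$, here is a small sympy sketch (added here, not part of the original question) that factors the characteristic polynomial of the coefficient matrix read off from the system:

```python
import sympy as sp

# coefficient matrix read off from the right-hand sides of the system
A = sp.Matrix([[1,  2,  1],
               [1, -5,  3],
               [3,  2, -1]])
lam = sp.symbols('lambda')

print(sp.factor((A - lam * sp.eye(3)).det()))  # characteristic polynomial, factored
print(A.eigenvals())                           # eigenvalues with multiplicities
```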

Rule for $-1(3x+5)$ and the order of operations

Posted: 30 Jun 2022 09:19 AM PDT

For $-1(3x+5)$ we distribute the $-1$, but doesn't that break the PEMDAS order of operations, which says parentheses come first?
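As a quick sanity check (a throwaway sympy sketch, not part of the original question), distributing the $-1$ and evaluating the parentheses first give the same value:

```python
import sympy as sp

x = sp.symbols('x')
print(sp.expand(-1 * (3*x + 5)))   # -3*x - 5, the distributed form
print(-1 * (3*2 + 5), -3*2 - 5)    # at x = 2: parentheses first vs. distributed, both -11
```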

Let's say $f(x)=1/x$. Find $f'(x)$, $f''(x)$, $f'''(x)$ and $f^{(4)}(x)$. Deduce $f^{(n)}(x)$.

Posted: 30 Jun 2022 09:12 AM PDT

Let's say $f(x)=1/x$. Find $f'(x)$, $f''(x)$, $f'''(x)$ and $f^{(4)}(x)$. Deduce $f^{(n)}(x)$.
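For what it's worth, here is a small sympy sketch (added for illustration) that computes the first few derivatives, which is one way to spot the pattern before proving it:

```python
import sympy as sp

x = sp.symbols('x')
f = 1 / x
for k in range(1, 5):
    print(k, sp.diff(f, x, k))       # f', f'', f''', f''''

# compare the fourth derivative with the conjectured pattern (-1)^n n!/x^(n+1)
print(sp.simplify(sp.diff(f, x, 4) - (-1)**4 * sp.factorial(4) / x**5))   # 0
```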

subgroup of finite index implies there is a short exact sequence

Posted: 30 Jun 2022 09:21 AM PDT

Given a PE cubical complex $Y$ and a map $Y\to S^1$, I am told that the induced map $\pi_1(Y)\to \pi_1(S^1)$ having image of finite index implies that there is a short exact sequence $$1\to H \to\pi_1(Y)\to \pi_1(S^1)\to 1.$$

I'm not sure why this is true, especially since the map to $\pi_1(S^1)$ is not surjective.

On the modulus of $\zeta(s)$ where $s\neq 1$, $\log M=\mathcal{O}(\log T)$

Posted: 30 Jun 2022 09:15 AM PDT

This wiki page contains the following integral for $\zeta(s)$ with $s=\sigma+it$ valid for all $s\in \mathbb{C}$:

$$(s-1)\zeta(s) = \frac{\pi}{2}\int_{-\infty}^{+\infty} \frac{(\frac{1}{2}+ix)^{1-s}}{\cosh^2(\pi x)} dx \tag{1}$$

Denote $M=\max_{|z|=\frac{3}{2}}{\left|\zeta\left(\frac{1}{2}+iTz\right)\right|}$, where $T>3$.

Question: Prove that as $T\to \infty$ $$\log M=\mathcal{O}(\log T) $$

So take $s=\frac{1}{2}+iTz$ so we have $s=\frac{1}{2}+\frac{3}{2}iT e^{i\theta}=\left(\frac{1-3T\sin \theta}{2}\right)+i\left(\frac{3T\cos \theta}{2}\right)$

Note that $s\neq 1$: $s=1$ would mean $z=-\frac{i}{2T}$, so that $|z|=\frac{1}{2T}=\frac{3}{2}$, which gives $T=\frac{1}{3}$ and contradicts the fact that $T>3$.

By $(1)$ we can write $$|\zeta(s)|\leq \frac{\pi}{2}\int_{-\infty}^{+\infty} \frac{\left|(\frac{1}{2}+ix)^{1-s}\right|}{\cosh^2(\pi x)}\, dx. \tag{2}$$ Now $$\left|\left(\frac{1}{2}+ix\right)^{1-s}\right|=\left|\left(\frac{1}{2}+ix\right)^{1-\sigma-it}\right|=\left|\left(\frac{1}{2}+ix\right)^{1-\sigma}\right|\left|\left(\frac{1}{2}+ix\right)^{-it}\right|, $$ so $$\left|\left(\frac{1}{2}+ix\right)^{1-s}\right|= \left(\frac{1}{4}+x^2\right)^{\frac{1-\sigma}{2}} e^{t\tan^{-1}(2x)}.\tag{3}$$ Using $(3)$ in $(2)$ we have $$|\zeta(s)|\leq \frac{\pi}{2}\int_{-\infty}^{+\infty} \frac{\left(\frac{1}{4}+x^2\right)^{\frac{1-\sigma}{2}} e^{t\tan^{-1}(2x)}}{\cosh^2(\pi x)}\, dx. \tag{4}$$ Splitting the integral on the RHS of $(4)$, $$\frac{2}{\pi}|\zeta(s)|\leq \int_{0}^{+\infty} \frac{\left(\frac{1}{4}+x^2\right)^{\frac{1-\sigma}{2}} e^{t\tan^{-1}(2x)}}{\cosh^2(\pi x)}\, dx+\int_{-\infty}^{0} \frac{\left(\frac{1}{4}+x^2\right)^{\frac{1-\sigma}{2}} e^{t\tan^{-1}(2x)}}{\cosh^2(\pi x)}\, dx. \tag{5}$$ In the second integral of $(5)$, substitute $x=-y$ to get $$\frac{2}{\pi}|\zeta(s)|\leq \int_{0}^{+\infty} \frac{\left(\frac{1}{4}+x^2\right)^{\frac{1-\sigma}{2}} \left(e^{t\tan^{-1}(2x)}+e^{-t\tan^{-1}(2x)}\right)}{\cosh^2(\pi x)}\, dx. \tag{6}$$ Hence $$\frac{2}{\pi}|\zeta(s)|\leq 2\int_{0}^{+\infty} \frac{\left(\frac{1}{4}+x^2\right)^{\frac{1-\sigma}{2}} \cosh\!\left(t\tan^{-1}(2x)\right)}{\cosh^2(\pi x)}\, dx. \tag{7}$$ Now $\sigma=\frac{1-3T\sin \theta}{2}$ and $t=\frac{3T\cos \theta}{2}$.

I am struggling to prove the required expression $\log M=\mathcal{O}(\log T)$. Please help me.
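As an aside (added here, not part of the original question, and assuming mpmath is available), the representation $(1)$ can be sanity-checked numerically before it is used for estimates; the two printed values should agree:

```python
from mpmath import mp, mpc, quad, cosh, pi, zeta

mp.dps = 25
s = mpc(0.3, 7.0)   # an arbitrary test point s = sigma + it, s != 1

lhs = (s - 1) * zeta(s)
rhs = (pi / 2) * quad(lambda x: (0.5 + 1j * x)**(1 - s) / cosh(pi * x)**2,
                      [-mp.inf, mp.inf])
print(lhs)
print(rhs)   # should agree with lhs if (1) holds at this s
```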

Solve $u_t=2u_{xx}+t\sin(x)$

Posted: 30 Jun 2022 09:22 AM PDT

Solve $u_t=2u_{xx}+t\sin(x)$ for $t>0$, $0<x<\pi$,
with boundary conditions $u(0,t)=u(\pi,t)=0$ and initial condition $u(x,0)=\sin(x)\cos(x)$.

My attempt:
Solving the homogeneous problem first, I got $u(x,t)=\frac{1}{2}e^{-8t}\sin(2x)$. Then I tried to differentiate this and substitute it back into the original equation to account for the non-homogeneous part, and I got
$$-\frac{1}{2}\cdot 8e^{-8t}\sin(2x)=2\left(-\frac{1}{2}\cdot 4e^{-8t}\sin(2x)\right)+t\sin(x).$$ But from here I didn't know how to proceed. Any hints?
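One standard route (sketched below with sympy; this is an addition of mine and may not be the method your course intends) is to expand $u(x,t)=\sum_n b_n(t)\sin(nx)$, so the forcing $t\sin x$ only enters the $n=1$ mode and each coefficient satisfies a first-order ODE:

```python
import sympy as sp

t = sp.symbols('t')
b1, b2 = sp.Function('b1'), sp.Function('b2')

# expand u(x,t) = sum_n b_n(t) sin(n x); the forcing t*sin(x) only feeds the n = 1 mode
ode1 = sp.Eq(b1(t).diff(t), -2 * 1**2 * b1(t) + t)     # n = 1 mode, forced by t
ode2 = sp.Eq(b2(t).diff(t), -2 * 2**2 * b2(t))         # n = 2 mode, unforced
sol1 = sp.dsolve(ode1, b1(t), ics={b1(0): 0})          # u(x,0) = (1/2) sin 2x gives b1(0) = 0
sol2 = sp.dsolve(ode2, b2(t), ics={b2(0): sp.Rational(1, 2)})
print(sol1)   # b1(t) = t/2 - 1/4 + exp(-2*t)/4
print(sol2)   # b2(t) = exp(-8*t)/2
```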

Boolean Algebras admit Separative Partial Orders

Posted: 30 Jun 2022 09:05 AM PDT

In Chapter 14 of Jech's Set Theory, he writes that if $B$ is a Boolean algebra, then $(B^+,<)$ is a separative partial order. I'm having trouble seeing why this is the case.

For the definition of a Boolean algebra and the corresponding order: [image with the definitions]

Given a Boolean Algebra $B$, $B^+$ is given by $B\setminus \{0\}$.

A partial order is separative if for all $p,q$: if $p\not\leq q$ then there exists some $r\leq p$ that is incompatible with $q$; that is, for all $s$ it is not the case that both $s\leq r$ and $s\leq q$.

With the definitions established, I have a few questions:

  1. Jech claims it is easy to see that the ordering on $B$ is a partial order, but I'm having some trouble seeing why. Showing that $u\leq u$ and that $(u\leq v \wedge v\leq w)\to(u\leq w)$ hold is not particularly hard, but I'm having trouble seeing why $(u\leq v\wedge v\leq u)\to u=v$. I'm sure there's a simple argument, but I'm not finding it.

  2. I'm not quite sure why this partial order is separative once you exclude $0$. First, is my understanding of what it means for a partial order to be separative correct? If so, I saw Asaf Karagila's answer here, but I don't understand how the calculations there yield that for all $z\leq x$, $z\perp y$; it seems to show it only for a specific $z$. Am I misunderstanding something?
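Not a proof, but here is a small brute-force check (added for illustration) of the separativity definition above on the finite Boolean algebra of subsets of a 3-element set, with $\leq$ taken to be inclusion:

```python
from itertools import combinations

def powerset(s):
    return [frozenset(c) for r in range(len(s) + 1) for c in combinations(s, r)]

B = powerset({0, 1, 2})          # Boolean algebra: all subsets of a 3-element set
Bplus = [p for p in B if p]      # B^+ = B \ {0}; here 0 is the empty set

def leq(p, q):
    return p <= q                # the Boolean-algebra order is inclusion here

def incompatible(r, q):
    # r is incompatible with q iff no s in B^+ lies below both
    return not any(leq(s, r) and leq(s, q) for s in Bplus)

separative = all(
    any(leq(r, p) and incompatible(r, q) for r in Bplus)
    for p in Bplus for q in Bplus if not leq(p, q)
)
print(separative)   # True for this finite example
```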

Pair of Circles (Non intersecting)

Posted: 30 Jun 2022 09:01 AM PDT

Relative orientation of two circles

I have two circles (at the moment, non-intersecting). Circle 1: centre (xc1, yc1), radius R1. Circle 2: centre (xc2, yc2), radius R2. There is an arbitrary point P (xp, yp), which may lie anywhere in the plane except inside the circles. Theta1 is the angle that the line from the centre of circle 1 to P makes with the positive x-axis, and theta2 is the angle that the line from the centre of circle 2 to P makes with the positive x-axis.

How can I find a relation between the angles theta1 and theta2?

Thank you.
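For concreteness (a small sketch of mine, with made-up numbers), the two angles as described can be computed with atan2, which at least pins down the setup:

```python
import math

xc1, yc1, R1 = 0.0, 0.0, 1.0      # example circle 1 (made-up values)
xc2, yc2, R2 = 5.0, 1.0, 2.0      # example circle 2 (made-up values)
xp, yp = 3.0, 4.0                 # a point outside both circles

theta1 = math.atan2(yp - yc1, xp - xc1)   # angle of centre-1 -> P from the +x axis
theta2 = math.atan2(yp - yc2, xp - xc2)   # angle of centre-2 -> P from the +x axis
print(theta1, theta2)
```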

Setting up a transformation keeping Minkowski metric invariant

Posted: 30 Jun 2022 09:01 AM PDT

What's wrong with this approach? Let a coordinate transformation $(x,t)\leftrightarrow (x',t')$ satisfy $\frac{dx}{dt}\big|_{x'=0}=v$. (1)

Now suppose we seek linear transformations with coefficients depending on $v$: $x=A(v)x'+B(v)t'$, $t=D(v)x'+E(v)t'$.

Now equation (1) uses differentials, so this eventually yields derivatives of the unknown coefficients and the appearance of $dv=a'\,dt'$: a dependence on the acceleration?

Moreover, the condition $c^2t^2-x^2=c^2t'^2-x'^2$ implies a nonlinear system of ODEs for those coefficients (which I have not solved yet).

This seems wrong, since $a$ appears, and hence the coefficients should be $A(a,v)$ and so on; iterating this point, the coefficients end up with infinitely many arguments.
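For reference (added here, not part of the original question), the standard velocity-only linear solution does keep $c^2t^2-x^2$ invariant, which can be checked symbolically:

```python
import sympy as sp

xp, tp = sp.symbols("x' t'", real=True)
v, c = sp.symbols("v c", positive=True)

gamma = 1 / sp.sqrt(1 - v**2 / c**2)
x = gamma * (xp + v * tp)              # standard Lorentz boost, coefficients depend on v only
t = gamma * (tp + v * xp / c**2)

# expected to simplify to 0: the quadratic form is preserved
print(sp.simplify(c**2 * t**2 - x**2 - (c**2 * tp**2 - xp**2)))
```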

Can $p$-adic Integers Be Expressed as Fractals?

Posted: 30 Jun 2022 09:13 AM PDT

In the book "A Course in $p$ Adic Analysis" by Alain M Robert, he describes how the $p$ adic integers can be expressed as fractals. For example, $\mathbb{Z}_3$ can be expressed as a Sierpińsky gasket shown below.

[image: the Sierpiński gasket model of $\mathbb{Z}_3$]

But the $p$-adic integers apparently can form more complex fractals. For example, here is a model of $\mathbb{Z}_5$:

[image: a fractal model of $\mathbb{Z}_5$]

How exactly does this work? The author does explain the work behind the different models, but I am confused about how $\mathbb{Z}_p$ can be expressed in this way at an elementary level. What exactly is going on here, intuitively?

Additionally, if someone could point me to resources that elaborate on this subject (i.e. $p$-adic integers and their relation to fractals), that would be amazing.
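One common way to get such pictures (a sketch of my own, not necessarily the exact model in Robert's book) is to send a truncated digit expansion $a_0+a_1p+\dots+a_kp^k$ to the point $\sum_k v_{a_k}/2^{k+1}$, where $v_0,\dots,v_{p-1}$ are fixed points in the plane; for $p=3$ and the vertices of a triangle this produces a Sierpiński gasket:

```python
import itertools
import numpy as np
import matplotlib.pyplot as plt

p, depth = 3, 6
verts = np.array([[0.0, 0.0], [1.0, 0.0], [0.5, np.sqrt(3) / 2]])  # one point per 3-adic digit

pts = np.array([
    sum(verts[d] / 2**(k + 1) for k, d in enumerate(digits))
    for digits in itertools.product(range(p), repeat=depth)        # truncated 3-adic expansions
])
plt.scatter(pts[:, 0], pts[:, 1], s=1)
plt.gca().set_aspect('equal')
plt.show()
```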

check whether this equilibrium is locally stable or not?

Posted: 30 Jun 2022 09:10 AM PDT

Given a system

$\dot\theta_1=\sin(\theta_1-\theta_2)+\sin(\theta_1-\theta_3)$

$\dot\theta_2=\sin(\theta_2-\theta_1)+\sin(\theta_2-\theta_3)$

$\dot\theta_3=\sin(\theta_3-\theta_1)+\sin(\theta_3-\theta_2)$

could someone help me check the stability of the equilibrium $(\theta_1,\theta_2,\theta_3)=(0,\pi,0)$?
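The usual first step (sketched below, added here for illustration) is to linearize: compute the Jacobian of the right-hand side at $(0,\pi,0)$ and look at its eigenvalues:

```python
import sympy as sp

t1, t2, t3 = sp.symbols('theta1 theta2 theta3')
F = sp.Matrix([sp.sin(t1 - t2) + sp.sin(t1 - t3),
               sp.sin(t2 - t1) + sp.sin(t2 - t3),
               sp.sin(t3 - t1) + sp.sin(t3 - t2)])

J = F.jacobian([t1, t2, t3]).subs({t1: 0, t2: sp.pi, t3: 0})
print(J)
print(J.eigenvals())   # an eigenvalue with positive real part would rule out asymptotic stability
```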

Prove uniqueness of a certain arrow in the $\mathbf{Pno}$ category

Posted: 30 Jun 2022 09:10 AM PDT

This question is based on exercise 1.2.1 from Harold Simmons' book, An Introduction to Category Theory.

Consider the category $\textbf{Pno}$ whose objects are $(A,\alpha,a)$, where $A$ is a set, $\alpha:A\rightarrow A$ is a function and $a\in A$ is a distinguished element. The arrows of $\mathbf{Pno}$ are $f:(A,\alpha,a)\rightarrow(B,\beta,b)$, where $f\circ\alpha = \beta\circ f$ and $f(a)=b$.

The exercise asks to prove that, for every object $(A,\alpha,a)$ of $\mathbf{Pno}$, there is a unique arrow $f:(\mathbb{N},succ,0)\rightarrow (A,\alpha,a)$, where $succ$ is the successor function.

This arrow is given by $f(n)=\alpha^n(a)$, which satisfies the properties required of a $\mathbf{Pno}$-arrow. Now, in the solutions section they say that a proof by induction shows that this is the only such arrow. However, I don't see how an induction (I guess over $n$) proves the uniqueness of this arrow.

Could anybody give me a hand with this? I'd really appreciate it.

Which set is an algebraic set?

Posted: 30 Jun 2022 09:06 AM PDT

Are the sets $\{x^2 + y^2 -r^2 =0\}$ and $\{x^2 + y^2 -r^2 \le 0\}$ algebraic sets in $\mathbb{A}^2 (\mathbb{R})$?

I think the first one is, because it is the zero set of a polynomial, and the second one isn't.

the angle formed by incenter, vertex and circumcenter

Posted: 30 Jun 2022 09:20 AM PDT

In the triangle $\triangle ABC$, $I$ is the incenter, $O$ is the circumcenter. Prove that $\angle ICO=\frac{|\angle A-\angle B|}{2}$.

[image: triangle $ABC$ with incenter $I$ and circumcenter $O$]

I found this claim among the steps of some proof, but I spent a day on it without finding an answer. I used GeoGebra to test it, and it was correct. Can anyone help me? Thanks.

I tried connecting $AI$ and $BI$. Assuming $\angle B >\angle A$, I can draw $\angle IBE=\frac{\angle B-\angle A}{2}$, but I do not see how to relate it to the angle in question.

[image: the attempted construction]
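In the same spirit as the GeoGebra test, here is a small numerical check (a sketch of mine, with a randomly chosen triangle; the helper names and coordinates are made up) that compares $\angle ICO$ with $\frac{|\angle A-\angle B|}{2}$:

```python
import numpy as np

def angle(u, v):
    """Unsigned angle between vectors u and v."""
    u, v = np.asarray(u, float), np.asarray(v, float)
    return np.arccos(np.clip(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)), -1, 1))

rng = np.random.default_rng(0)
A, B, C = rng.random((3, 2)) * 10          # a random test triangle

a, b, c = np.linalg.norm(B - C), np.linalg.norm(C - A), np.linalg.norm(A - B)
I = (a * A + b * B + c * C) / (a + b + c)  # incenter (barycentric weights a : b : c)

# circumcenter via the standard perpendicular-bisector formula
D = 2 * (A[0]*(B[1]-C[1]) + B[0]*(C[1]-A[1]) + C[0]*(A[1]-B[1]))
O = np.array([
    ((A@A)*(B[1]-C[1]) + (B@B)*(C[1]-A[1]) + (C@C)*(A[1]-B[1])) / D,
    ((A@A)*(C[0]-B[0]) + (B@B)*(A[0]-C[0]) + (C@C)*(B[0]-A[0])) / D,
])

angA, angB = angle(B - A, C - A), angle(A - B, C - B)
print(angle(I - C, O - C), abs(angA - angB) / 2)   # the two numbers should agree
```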

Simplification of nonlinear constraint for optimization

Posted: 30 Jun 2022 09:21 AM PDT

I am solving a nonlinear optimization problem with the constraint

$$ \sum_{j=1}^S \ln \Bigg( \Big(1 -x_j c_k + y_j d_k\Big)^2 + \Big( x_j d_k + y_j c_k \Big)^2 \Bigg) \leq 0, \: k = 1, \dots , K $$

where the optimization variables are $\boldsymbol x, \boldsymbol y \in \mathbb R^S$ and the coefficients $\boldsymbol c , \boldsymbol d \in \mathbb R^K$. I derived this constraint from

$$ \prod_{j=1}^S \Big[\Big(1 -x_j c_k + y_j d_k\Big)^2 + \Big( x_j d_k + y_j c_k \Big)^2\Big] \leq 1, \: k = 1, \dots , K $$

where the sum of logs seemed more appealing to me than the product of squares.

Does anyone see further simplifications / reformulations of this constraint (e.g. to conic constraints, "less" nonlinear one, ...)?

To be honest, I do not have much hope, since I think this problem is inherently non-convex when $\boldsymbol x, \boldsymbol y, \boldsymbol c,\boldsymbol d$ take negative values, but I want to give it a shot; maybe someone sees something.
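In case it helps to experiment, here is a small numpy sketch (added here, with made-up data) of the log-form constraint values, together with a check that it matches the log of the product form:

```python
import numpy as np

def log_form(x, y, c, d):
    """sum_j log((1 - x_j c_k + y_j d_k)^2 + (x_j d_k + y_j c_k)^2), one value per k."""
    t1 = 1 - np.outer(c, x) + np.outer(d, y)   # shape (K, S)
    t2 = np.outer(d, x) + np.outer(c, y)
    return np.log(t1**2 + t2**2).sum(axis=1)

def product_form(x, y, c, d):
    """prod_j [(1 - x_j c_k + y_j d_k)^2 + (x_j d_k + y_j c_k)^2], one value per k."""
    t1 = 1 - np.outer(c, x) + np.outer(d, y)
    t2 = np.outer(d, x) + np.outer(c, y)
    return (t1**2 + t2**2).prod(axis=1)

rng = np.random.default_rng(1)
x, y = rng.normal(size=4), rng.normal(size=4)     # S = 4 (made-up data)
c, d = rng.normal(size=3), rng.normal(size=3)     # K = 3
print(np.allclose(log_form(x, y, c, d), np.log(product_form(x, y, c, d))))   # True
```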

Faithfulness of a given operator family on von Neumann tensor product

Posted: 30 Jun 2022 09:07 AM PDT

Let $G$ be a locally compact (Hausdorff) group and $H$ a Hilbert space. Given $\xi,\eta\in C_c(G)$, we denote by $\kappa_{\xi,\eta}$ the normal (i.e. ultraweakly continuous) linear functional on $B(L^2(G))$ given by $T\mapsto \langle T\xi,\eta\rangle$ and let $\kappa_{\xi,\eta}\overline{\otimes}\text{id}_{B(H)}\colon B(L^2(G))\overline{\otimes}B(H)\to B(H)$ be the associated ultraweakly continuous and completely positive slice map.

I want to conclude the following for an element $x\in B(L^2(G))\overline{\otimes}B(H)$: $$x=0 \Leftrightarrow \kappa_{\xi,\eta}\overline{\otimes}\text{id}_{B(H)}(x)=0 \quad \text{for every} \ \xi,\eta\in C_c(G) $$

Apparently, this should be "clear enough". All hints are appreciated.

Is the set $\{(x,y)\in \mathbb{R}^2 \mid x,y>0\}$ open in $\mathbb{R}^2$?

Posted: 30 Jun 2022 09:05 AM PDT

Let $U=\{(x,y)\in \mathbb{R}^2 \mid x,y>0\}$. Is $U$ open in $\mathbb{R}^2$?

My attempt: Let $(x,y)\in U$. Choose $r=\frac{1}{2}\min\{x,y\}$. Since $x,y>0$, $r>0$. Consider the open ball $B_d((x,y),r)$, where $d$ is the standard Euclidean metric (as no metric is specified in the problem). Let $(p,q) \in B_d((x,y), r)$; then $d((x,y), (p,q))<r$. Since $r\leq \frac{x}{2}<x$, $$d((x,y), (p,q))<x \implies \sqrt{(x-p)^2+(y-q)^2} <x \implies (x-p)^2+(y-q)^2<x^2 \implies (x-p)^2<x^2 \implies |x-p|<|x|=x \implies x-p \leq |x-p|<x \implies p>0.$$ Similarly $q>0$. So $(p,q) \in U$. Hence $B_d((x,y), r) \subset U$, and $U$ is open in $\mathbb{R}^2$.

Is my proof correct?

Find maximum curvature of a power function (a curve without a vertex where rate of change $=0$)

Posted: 30 Jun 2022 09:08 AM PDT

I am trying to find the best way to calculate and visualize the critical point (the 'elbow') of an inverse power curve.

As an example, consider the curve of: $150x^{-0.4}$

link to image of curve

I modeled the function using https://www.geogebra.org/graphing (type in $150x^{-0.4}$ as the function). A power curve is unlike a polynomial in that it does not have a vertex where the rate of change is $0$. Therefore I need a different method of calculating the maximum or minimum (depending on orientation) for the curve.

Visual inspection of this curve suggests that the maximum curvature in the 'elbow' is likely somewhere in $10<x<60$. I used recommendations for finding maximum curvature based on the protocol provided in other forum threads.

The answer I calculated as the point of maximum curvature is $(50.7, 31.2)$. However, I am not confident about this answer, because when I computed the curvature at each point in Excel the maximum appears to lie between $x=16$ and $x=18$. I don't know how to validate it, though.

[table of curve values; note the change at $x=16$ to $18$]

I have a couple of inquiries:

  • Is this the equation I should be using to find the apex or maximum of this curve?
  • What software or programming would you recommend to aid in quantifying and visualizing this maximum curvature? It would be nice to have something like what is seen 52 seconds into this video: https://youtu.be/wyPXbvsd9nI?t=52 (there are other examples, but none of them have been satisfying after a quick test; I'm still exploring). A curvature sketch is given after this list.
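For the computation itself, assuming the usual plane-curve curvature $\kappa=\frac{|y''|}{(1+y'^2)^{3/2}}$ is what you want to maximize, a short sympy/scipy sketch (added here, not part of the original question) locates the maximum numerically:

```python
import sympy as sp
from scipy.optimize import minimize_scalar

x = sp.symbols('x', positive=True)
y = 150 * x**sp.Rational(-2, 5)                       # the curve 150*x^(-0.4)
yp, ypp = sp.diff(y, x), sp.diff(y, x, 2)
kappa = ypp / (1 + yp**2)**sp.Rational(3, 2)          # y'' > 0 for x > 0, so no absolute value needed
k = sp.lambdify(x, kappa, 'numpy')

res = minimize_scalar(lambda t: -k(t), bounds=(1, 200), method='bounded')
x_star = res.x
print(x_star, 150 * x_star**-0.4, k(x_star))          # location, height on the curve, curvature there
```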

Derivative of a multivariable function defined with the inner product

Posted: 30 Jun 2022 09:16 AM PDT

Let $n\geq 1$ and let $f: \mathbb{R}^n \to \mathbb{R}^n$ be a $C^1$ function. We know that for a fixed $x=(x_1,\dots,x_n)$ in $\mathbb{R}^n$, the derivative of $f$ at $x$, denoted by $f'(x)$, is a continuous linear mapping, since $$ f': \mathbb{R}^n \to \mathcal{L}(\mathbb{R}^n,\mathbb{R}^n).$$

In this particular case, we know that $\mathcal{L}(\mathbb{R}^n,\mathbb{R}^n)\approx \mathbb{R}^{n\times n}$, so $f'(x)$ is a matrix.

Now take a simple example: $g:\mathbb{R}^3\to \mathbb{R}$ given by $g(x,y,z) = \langle(a,b),f(x,y,z)\rangle$ with $f(x,y,z)=(f_1(x,y,z),f_2(x,y,z))$. If we want to calculate $$ g'(x,y,z),$$ is this equal to $g'(x,y,z)= f'(x,y,z)(a,b)$?
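A concrete sympy check (added here, with an arbitrary made-up $f$) shows how the derivative of $g$ relates to the Jacobian of $f$ in one example:

```python
import sympy as sp

x, y, z, a, b = sp.symbols('x y z a b')
f = sp.Matrix([x*y + z, sp.sin(x) + y*z])        # a concrete f: R^3 -> R^2 (made up)
g = sp.Matrix([a, b]).dot(f)                     # g = <(a,b), f>

grad_g = sp.Matrix([[sp.diff(g, v) for v in (x, y, z)]])    # 1x3 derivative of g
Jf = f.jacobian([x, y, z])                                  # 2x3 Jacobian of f
print(sp.simplify(grad_g - sp.Matrix([[a, b]]) * Jf))       # zero matrix in this example
```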

Rewriting quadratic optimization problem with negative definite matrix

Posted: 30 Jun 2022 09:20 AM PDT

Consider the following problem:

Minimize $x(1-x) + y(1-y) + z(1-z)$ subject to: $$ a. 0\leq x, y, z \leq 1$$ $$ b. x+y+z = 1$$

If I plug in this problem in a standard quadratic optimizer, which minimizes $\displaystyle c^T \cdot \begin{pmatrix} x \\ y \\z \end{pmatrix} + \frac{1}{2}\cdot \begin{pmatrix} x&y&z \end{pmatrix} G \cdot \begin{pmatrix} x \\ y \\z \end{pmatrix} $, subject to constraints a) and b), I get the error message "the matrix G is not positive definite".

I'm aware that the minimum is attained when exactly one of $x, y, z$ equals $1$ (and the others $0$), but I need to translate this problem into a quadratic form with a positive definite matrix $G$ (as part of a larger problem I am trying to solve, which contains more variables). Are there any suggestions on how to do so? Thanks a lot!
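For what it's worth, writing the objective in the solver's form makes the complaint explicit: here $c=(1,1,1)$ and $G=-2I$, which is negative definite. A tiny numpy check (added here for illustration):

```python
import numpy as np

# x(1-x) + y(1-y) + z(1-z) = c^T v + (1/2) v^T G v  with v = (x, y, z)
c = np.ones(3)
G = -2.0 * np.eye(3)      # negative definite, hence the solver's complaint

v = np.array([0.2, 0.3, 0.5])                 # any feasible point
lhs = np.sum(v * (1 - v))
rhs = c @ v + 0.5 * v @ G @ v
print(np.isclose(lhs, rhs))                   # True
```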

Time derivative of the blend of a pair of quaternion curves

Posted: 30 Jun 2022 09:16 AM PDT

I have two curves ${\bf q}_0(t), {\bf q}_1(t)$. Each curve maps time $t$ to a unit quaternion. The construction of these curves is not important here, although we do have the respective time derivatives $\dot{\bf q}_0(t), \dot{\bf q}_1(t)$. My goal is to find a unit quaternion, and its time derivative, for the two curves blended by some parameter $p \in [0, 1]$. Using a slerp for the blended unit quaternion gives ${\bf q} = {\bf q}_0 ({\bf q}^*_0 {\bf q}_1)^p$, where ${\bf q}^*$ is the quaternion conjugate and ${\bf q}^p = e^{p \log({\bf q})}$. For the sake of this discussion, assume $p$ is not time dependent. It will be in the final solution, but for now let's not be bothered by that.

My question is how to compute the time derivative $\dot{\bf q}$. Intuitively, it appears to be something of the form $\dot{\bf q} = ((1 - p)\dot{\bf q}_0{\bf q}^*_0 + p\dot{\bf q}_1 {\bf q}^*_1) {\bf q}$, which is linearly interpolated angular velocities times blend result. This formula obviously is correct for $p = 0$ and for $p = 1$. I'm trying to find some mathematical backing for the general case.

Naively, taking $\dot{\bf q} = \dot{\bf q}_0({\bf q}^*_0 {\bf q}_1)^p + {\bf q}_0 p ({\bf q}^*_0 {\bf q}_1)^{p - 1} (\dot{\bf q}^*_0 {\bf q}_1 + {\bf q}^*_0 \dot{\bf q}_1)$ can't be correct in the general case, since the product operator does not commute.

I could use some help in finding the proper time derivative. In particular, I would like to know whether there is a general formula for $\frac{d}{dt}{\bf q}(t)^p$, for constant $p \in [0, 1]$. Perhaps someone with a grasp of Lie groups/algebras could shed some light on this. Thanks for your help.
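One way to make progress is to test candidate formulas numerically; below is a rough numpy sketch (entirely an addition of mine, with made-up test curves) that finite-differences the blend and prints it next to the conjectured expression:

```python
import numpy as np

def qmul(a, b):   # Hamilton product, quaternions stored as [w, x, y, z]
    w1, x1, y1, z1 = a; w2, x2, y2, z2 = b
    return np.array([w1*w2 - x1*x2 - y1*y2 - z1*z2,
                     w1*x2 + x1*w2 + y1*z2 - z1*y2,
                     w1*y2 - x1*z2 + y1*w2 + z1*x2,
                     w1*z2 + x1*y2 - y1*x2 + z1*w2])

def qconj(q):
    return q * np.array([1.0, -1, -1, -1])

def qexp(v):      # exp of a pure quaternion with vector part v
    n = np.linalg.norm(v)
    if n < 1e-12:
        return np.array([1.0, 0, 0, 0])
    return np.concatenate(([np.cos(n)], np.sin(n) * v / n))

def qlog(q):      # log of a unit quaternion, returned as its 3-vector part
    v = q[1:]; n = np.linalg.norm(v)
    return np.zeros(3) if n < 1e-12 else np.arctan2(n, q[0]) * v / n

def qpow(q, p):
    return qexp(p * qlog(q))

def q0(t): return qexp(t * np.array([0.3, 0.1, 0.0]))          # made-up test curves
def q1(t): return qexp((0.2 + 0.5*t) * np.array([0.0, 0.4, 0.2]))

def blend(t, p):
    return qmul(q0(t), qpow(qmul(qconj(q0(t)), q1(t)), p))

t, p, h = 0.7, 0.35, 1e-6
dq  = (blend(t + h, p) - blend(t - h, p)) / (2 * h)            # numerical d/dt of the blend
dq0 = (q0(t + h) - q0(t - h)) / (2 * h)
dq1 = (q1(t + h) - q1(t - h)) / (2 * h)
guess = qmul((1 - p) * qmul(dq0, qconj(q0(t))) + p * qmul(dq1, qconj(q1(t))), blend(t, p))
print(dq, guess, sep="\n")   # compare these to see how close the conjectured formula is
```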

Solving the integral $\int_{0}^{1} \frac{s \ln{s} - t \ln{t}}{(s^{2} - t^{2})\sqrt{1-s^{2}}}\,\mathrm{d}s$

Posted: 30 Jun 2022 09:22 AM PDT

For $t \in [0,1]$, let

$$f(t) = \int_{0}^{1} \frac{s \ln{s} - t \ln{t}}{(s^{2} - t^{2})\sqrt{1-s^{2}}}\,\mathrm{d}s.$$

Is it possible to evaluate this integral to obtain an explicit expression for $f(t)$? Mathematica and Maple both fail to give an answer.
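Even if a closed form is elusive, the integral is easy to evaluate numerically (an mpmath sketch, added here), which can help in guessing a pattern; the singularity at $s=t$ is removable, so the range is split there to keep it at a panel endpoint:

```python
from mpmath import mp, quad, sqrt, log

mp.dps = 30   # extra precision helps near the removable singularity at s = t

def f(t):
    t = mp.mpf(t)
    def integrand(s):
        return (s*log(s) - t*log(t)) / ((s**2 - t**2) * sqrt(1 - s**2))
    return quad(integrand, [0, t, 1])   # split at s = t

for t in (0.25, 0.5, 0.75):
    print(t, f(t))
```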

Writing steps of the Bolzano Weierstrass theorem in logical formulas

Posted: 30 Jun 2022 09:04 AM PDT

Let $(x_n)$ be a sequence of elements of an interval $[a,b]\subset\mathbb{R}$. Then there exists a point of accumulation $c$ of the sequence with $c\in[a,b]$.

The proof goes like this:

Let $I_1=[a,b]$. Let $x_{n_1}\in I_1$. Let $c_1$ be the midpoint of $I_1$. Then $c_1$ separates the interval into $2$ intervals, each of length $\frac{b-a}{2}$. One of the two intervals must be such that $x_n$ lies in this interval for infinitely many $n$...

How can I translate the bolded part (the last sentence above) into logical formulas? I think it should be $$(\exists n\in\mathbb{N}_{\geq 1})(\forall m\in\mathbb{N}_{\geq n})(x_m\in[a,c])\lor(\exists n\in\mathbb{N}_{\geq 1})(\forall m\in\mathbb{N}_{\geq n})(x_m\in[c,b]).$$ Now let's say I want to prove the above disjunction by contradiction. The negation is $$(\forall n\in\mathbb{N}_{\geq 1})(\exists m\in\mathbb{N}_{\geq n})(x_m\not\in[a,c])\land(\forall n\in\mathbb{N}_{\geq 1})(\exists m\in\mathbb{N}_{\geq n})(x_m\not\in[c,b]).$$ But I am not able to derive a contradiction from this. Have I translated the bolded statement correctly?

Compound probability function and moment generating function

Posted: 30 Jun 2022 09:16 AM PDT

(Feller Vol.1, P.301) 4. Let $N$ have a Poisson distribution with mean $\lambda$, and let $N$ balls be placed randomly into $n$ cells. Show without calculation that the probability of finding exactly $m$ cells empty is ${n \choose m} e^{-\lambda m/n} [1- e^{-\lambda /n}]^{n-m}$.

Let $X_i$ be the number of balls in the $i$th cell. Then $P(X_i =0)=\sum_j P(N=j)\frac{(n-1)^{j}}{n^j} = e^{-\lambda /n}$. So the result follows from the binomial distribution with success probability $e^{-\lambda/n}$.

  1. Continuation. Show that when a fixed number $r$ of balls is placed randomly into $n$ cells the probability of finding exactly $m$ cells empty equals the coefficient of $e^{-\lambda}\lambda^r/r!$ in the expression above. (a) Discuss the connection with moment generating function. (b) Use the result for the effortless derivation of ${n \choose m} \sum_{v=0}^{n-m} (-1)^v {n-m \choose v} (n-m-v)^r$.

I know that this probability is equal to ${n \choose m}{n-m-1 \choose r-1}/n^r$ since ${n-m-1 \choose r-1}$ represents the number of ways for $r$ balls to be placed in $n-m$ cells without making any cell empty. Then I have the following generating function of this probability distribution: $$\sum_{r=0}^\infty s^r P(N=r){n \choose m}{n-m-1 \choose r-1}/n^r = \sum_{r=0}^\infty s^r e^{-\lambda}\lambda^{r}/r!{n \choose m}{n-m-1 \choose r-1}/n^r.$$ I also know that the moment generating function looks like $\sum_{r=0}^\infty \frac{m_r}{r!} s^r$, where $m_r$ is the $r$th moment.

I am not sure if this is correct, because I can't see any connection between the two generating functions. I would appreciate some help.
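As a sanity check on the Poissonized formula from the quoted problem (purely numerical and added here; it does not address the generating-function question itself), a quick Monte Carlo agrees with $\binom{n}{m}e^{-\lambda m/n}(1-e^{-\lambda/n})^{n-m}$:

```python
import numpy as np
from math import comb, exp

rng = np.random.default_rng(0)
lam, n, m, trials = 6.0, 5, 2, 200_000

hits = 0
for _ in range(trials):
    N = rng.poisson(lam)
    cells = rng.integers(0, n, size=N)          # place N balls uniformly into n cells
    if n - len(np.unique(cells)) == m:          # count empty cells
        hits += 1

exact = comb(n, m) * exp(-lam*m/n) * (1 - exp(-lam/n))**(n - m)
print(hits / trials, exact)                     # the two should be close
```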

Prove that every sequence of real numbers has at least one limit point

Posted: 30 Jun 2022 09:06 AM PDT

  1. Sequence is infinite and bounded.

    Let $A=\{x_n \mid n \in\mathbb{N}\}$. Since $A$ is both bounded and infinite, the existence of a limit point comes directly from the Bolzano-Weierstrass theorem for sets.

  2. Sequence is infinite and unbounded.

    Let $G$ be some neighbourhood of $+\infty$ (same applies for $-\infty$). For any $M\in\mathbb{R}, \exists n\in\mathbb{N}$ such that $x_n\in(M,+\infty)$ $\forall n\geq$ some $n_0$ thus there is a subsequence of $x_n$ that converges to infinity and so we can say that $+\infty$ is limit point of $x_n$

  3. Sequence is finite and bounded

    There is a certain real $a$ such that $x_n=a$ for finite $n$. $\implies \exists x_{n_k}=a \; \forall k\in\mathbb{N}\implies \lim_{k\to\infty} x_{n_k} = a$, thus there is a subsequence of $x_n$ that converges to some point ($a$), which is its limit point.

  4. Sequence cannot be finite and unbounded in $\mathbb{R}$

Please check my proof for any errors.

Simplify sop expression using Boolean algebra

Posted: 30 Jun 2022 09:03 AM PDT

How can I simplify this SOP expression using Boolean algebra?

$A'BC'D'+A'BC'D+A'BCD'+A'BCD+AB'C'D+AB'CD'+ABC'D'+ABC'D$

I have to use Boolean algebra rules only (no K-map); the answer should be this:

$A'B+BC'+AC'D+AB'CD'.$

Thanks!
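The claimed equivalence can at least be verified by brute force over all 16 input combinations (a quick Python check, added here), which is reassuring before hunting for the algebraic steps:

```python
from itertools import product

def orig(A, B, C, D):
    return ((not A and B and not C and not D) or (not A and B and not C and D) or
            (not A and B and C and not D)     or (not A and B and C and D) or
            (A and not B and not C and D)     or (A and not B and C and not D) or
            (A and B and not C and not D)     or (A and B and not C and D))

def simplified(A, B, C, D):
    return ((not A and B) or (B and not C) or
            (A and not C and D) or (A and not B and C and not D))

print(all(orig(*v) == simplified(*v) for v in product([False, True], repeat=4)))   # True
```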

Proving that every sequence has either an increasing or decreasing subsequence without Bolzano Weierstrass Theorem

Posted: 30 Jun 2022 09:08 AM PDT

Can we prove that every bounded sequence has either an increasing or decreasing (or both) subsequence (without first proving the Bolzano-Weierstrass Theorem)?

What are the differences between mathematical systems theory, Dynamical Systems, and Optimization and Control and how are they related to each other?

Posted: 30 Jun 2022 09:24 AM PDT

One of the areas of research of the Systems and Controls group at the Georgia Institute of Technology is mathematical systems theory. It seems that the electrical engineering department at the Georgia Institute of Technology is the only electrical engineering department that studies mathematical systems theory.

On the other hand, the arXiv has categories like Dynamical Systems, Systems and Control, and Optimization and Control; some of the authors of papers in these categories are from electrical engineering departments.

What are the differences between mathematical systems theory, Dynamical Systems, Systems and Control, and Optimization and Control, and how are they related to each other? Which universities are well known for their research in these areas? Are these applied math?

What is the total food grain production in 2002 given the following conditions?

Posted: 30 Jun 2022 09:09 AM PDT

The production of rice in the year $2001$ was $1000$ tonnes, which was $25$% of the total food grain production in that year. In the next year, the production of rice decreased by $4$%, while the production of rice as a percentage of total food grain production increased by $5$ percentage points. What is the total food grain production in $2002$?

My approach:

Production of rice in the year $2001$: $1000$ tonnes.

Percentage of total food grain production in $2001$: $25$%.

Production of rice in the year $2002$: $960$ tonnes (as it decreased by $4$%).

Percentage of total food grain production in $2002$: $30$%.

I am not able to relate these facts and calculate the total food grain production in $2002$.

Can anyone guide me on how to solve the problem?
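Just to put the arithmetic together (added here, following the figures above): $960$ tonnes is $30$% of the $2002$ total, so the total is $960/0.30$:

```python
rice_2001 = 1000                          # tonnes
rice_2002 = rice_2001 * (1 - 0.04)        # 4% decrease -> 960 tonnes
share_2002 = 0.25 + 0.05                  # 25% plus 5 percentage points
total_2002 = rice_2002 / share_2002
print(total_2002)                         # 3200.0 tonnes
```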

Solve $x^{3}-3x=\sqrt{x+2}$

Posted: 30 Jun 2022 09:00 AM PDT

Solve for real $x$

$$x^{3}-3x=\sqrt{x+2}$$

By inspection, $x=2$ is a root of this equation. So I squared both sides and divided the resulting sixth-degree polynomial by $x-2$. That left a quintic which I couldn't solve despite applying the rational root theorem and various substitutions. I believe there must be some nice method to solve this that I'm not seeing. Please help. Thanks!
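To see what one is aiming for, the real solutions can be located numerically (a sympy sketch, added here, not part of the original question): square both sides, take the real roots of the resulting sextic, and keep those that satisfy the original equation:

```python
import math
import sympy as sp

x = sp.symbols('x', real=True)
squared = sp.expand((x**3 - 3*x)**2 - (x + 2))    # squaring both sides adds extraneous roots
candidates = [float(r.evalf()) for r in sp.real_roots(squared)]
solutions = [r for r in candidates
             if abs(r**3 - 3*r - math.sqrt(r + 2)) < 1e-9]   # keep genuine solutions only
print(solutions)
```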
